2025-05-25 00:00:07.193864 | Job console starting
2025-05-25 00:00:07.208655 | Updating git repos
2025-05-25 00:00:07.271854 | Cloning repos into workspace
2025-05-25 00:00:07.451967 | Restoring repo states
2025-05-25 00:00:07.479148 | Merging changes
2025-05-25 00:00:07.479175 | Checking out repos
2025-05-25 00:00:07.949383 | Preparing playbooks
2025-05-25 00:00:08.612354 | Running Ansible setup
2025-05-25 00:00:13.286789 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-25 00:00:14.600864 |
2025-05-25 00:00:14.601024 | PLAY [Base pre]
2025-05-25 00:00:14.618247 |
2025-05-25 00:00:14.618371 | TASK [Setup log path fact]
2025-05-25 00:00:14.647838 | orchestrator | ok
2025-05-25 00:00:14.677658 |
2025-05-25 00:00:14.677858 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-25 00:00:14.719814 | orchestrator | ok
2025-05-25 00:00:14.736012 |
2025-05-25 00:00:14.736165 | TASK [emit-job-header : Print job information]
2025-05-25 00:00:14.788700 | # Job Information
2025-05-25 00:00:14.788940 | Ansible Version: 2.16.14
2025-05-25 00:00:14.788985 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-25 00:00:14.789020 | Pipeline: periodic-midnight
2025-05-25 00:00:14.789043 | Executor: 521e9411259a
2025-05-25 00:00:14.789064 | Triggered by: https://github.com/osism/testbed
2025-05-25 00:00:14.789110 | Event ID: 444f835cb2d2478a8534105ed96e1b6a
2025-05-25 00:00:14.799986 |
2025-05-25 00:00:14.800143 | LOOP [emit-job-header : Print node information]
2025-05-25 00:00:14.944588 | orchestrator | ok:
2025-05-25 00:00:14.944850 | orchestrator | # Node Information
2025-05-25 00:00:14.944888 | orchestrator | Inventory Hostname: orchestrator
2025-05-25 00:00:14.944913 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-25 00:00:14.944936 | orchestrator | Username: zuul-testbed01
2025-05-25 00:00:14.944957 | orchestrator | Distro: Debian 12.11
2025-05-25 00:00:14.944987 | orchestrator | Provider: static-testbed
2025-05-25 00:00:14.945012 | orchestrator | Region:
2025-05-25 00:00:14.945034 | orchestrator | Label: testbed-orchestrator
2025-05-25 00:00:14.945071 | orchestrator | Product Name: OpenStack Nova
2025-05-25 00:00:14.945092 | orchestrator | Interface IP: 81.163.193.140
2025-05-25 00:00:14.960419 |
2025-05-25 00:00:14.960551 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-25 00:00:15.456735 | orchestrator -> localhost | changed
2025-05-25 00:00:15.464644 |
2025-05-25 00:00:15.464755 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-25 00:00:16.800292 | orchestrator -> localhost | changed
2025-05-25 00:00:16.828680 |
2025-05-25 00:00:16.828835 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-25 00:00:17.190890 | orchestrator -> localhost | ok
2025-05-25 00:00:17.204719 |
2025-05-25 00:00:17.205188 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-25 00:00:17.255222 | orchestrator | ok
2025-05-25 00:00:17.285267 | orchestrator | included: /var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-25 00:00:17.296780 |
2025-05-25 00:00:17.296976 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-25 00:00:20.817934 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-25 00:00:20.818225 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/work/f9c00a01ee164987837fb5c6a0b88135_id_rsa
2025-05-25 00:00:20.818284 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/work/f9c00a01ee164987837fb5c6a0b88135_id_rsa.pub
2025-05-25 00:00:20.818324 | orchestrator -> localhost | The key fingerprint is:
2025-05-25 00:00:20.818362 | orchestrator -> localhost | SHA256:BpgnK+vOyTxQUTWkPoM2+DUaUTW2d/1smRQC6uHeBjs zuul-build-sshkey
2025-05-25 00:00:20.818398 | orchestrator -> localhost | The key's randomart image is:
2025-05-25 00:00:20.818449 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-25 00:00:20.818486 | orchestrator -> localhost | | .o+B ... .      |
2025-05-25 00:00:20.818523 | orchestrator -> localhost | | .. = + . . . .  |
2025-05-25 00:00:20.818558 | orchestrator -> localhost | | ..= + + . . .   |
2025-05-25 00:00:20.818591 | orchestrator -> localhost | | ..+ + = o + o   |
2025-05-25 00:00:20.818625 | orchestrator -> localhost | |..* B S *        |
2025-05-25 00:00:20.818696 | orchestrator -> localhost | |.o B + o + .     |
2025-05-25 00:00:20.818733 | orchestrator -> localhost | | .+ E o          |
2025-05-25 00:00:20.818763 | orchestrator -> localhost | | =.. o           |
2025-05-25 00:00:20.818794 | orchestrator -> localhost | | .B.             |
2025-05-25 00:00:20.818917 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-25 00:00:20.819013 | orchestrator -> localhost | ok: Runtime: 0:00:02.895963
2025-05-25 00:00:20.827004 |
2025-05-25 00:00:20.827103 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-25 00:00:20.863571 | orchestrator | ok
2025-05-25 00:00:20.877781 | orchestrator | included: /var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-25 00:00:20.890728 |
2025-05-25 00:00:20.891060 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-25 00:00:20.926704 | orchestrator | skipping: Conditional result was False
2025-05-25 00:00:20.938147 |
2025-05-25 00:00:20.938342 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-25 00:00:21.569201 | orchestrator | changed
2025-05-25 00:00:21.578939 |
2025-05-25 00:00:21.579104 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-25 00:00:21.830581 | orchestrator | ok
2025-05-25 00:00:21.843389 |
2025-05-25 00:00:21.843512 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-25 00:00:22.279516 | orchestrator | ok
2025-05-25 00:00:22.289599 |
2025-05-25 00:00:22.290194 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-25 00:00:22.689524 | orchestrator | ok
2025-05-25 00:00:22.695919 |
2025-05-25 00:00:22.696033 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-25 00:00:22.710588 | orchestrator | skipping: Conditional result was False
2025-05-25 00:00:22.718087 |
2025-05-25 00:00:22.718201 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-25 00:00:23.121290 | orchestrator -> localhost | changed
2025-05-25 00:00:23.134512 |
2025-05-25 00:00:23.134621 | TASK [add-build-sshkey : Add back temp key]
2025-05-25 00:00:23.433520 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/work/f9c00a01ee164987837fb5c6a0b88135_id_rsa (zuul-build-sshkey)
2025-05-25 00:00:23.433734 | orchestrator -> localhost | ok: Runtime: 0:00:00.014078
2025-05-25 00:00:23.440718 |
2025-05-25 00:00:23.440806 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-25 00:00:23.894953 | orchestrator | ok
2025-05-25 00:00:23.904324 |
2025-05-25 00:00:23.904522 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-25 00:00:23.928523 | orchestrator | skipping: Conditional result was False
2025-05-25 00:00:23.986824 |
2025-05-25 00:00:23.986970 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-25 00:00:24.413418 | orchestrator | ok
2025-05-25 00:00:24.427970 |
2025-05-25 00:00:24.430529 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-25 00:00:24.478728 | orchestrator | ok
2025-05-25 00:00:24.488763 |
2025-05-25 00:00:24.488875 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-25 00:00:25.039768 | orchestrator -> localhost | ok
2025-05-25 00:00:25.046957 |
2025-05-25 00:00:25.047060 | TASK [validate-host : Collect information about the host]
2025-05-25 00:00:26.369638 | orchestrator | ok
2025-05-25 00:00:26.403842 |
2025-05-25 00:00:26.403965 | TASK [validate-host : Sanitize hostname]
2025-05-25 00:00:26.536783 | orchestrator | ok
2025-05-25 00:00:26.554235 |
2025-05-25 00:00:26.554428 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-25 00:00:27.623766 | orchestrator -> localhost | changed
2025-05-25 00:00:27.629924 |
2025-05-25 00:00:27.630023 | TASK [validate-host : Collect information about zuul worker]
2025-05-25 00:00:28.233556 | orchestrator | ok
2025-05-25 00:00:28.250346 |
2025-05-25 00:00:28.250533 | TASK [validate-host : Write out all zuul information for each host]
2025-05-25 00:00:29.228450 | orchestrator -> localhost | changed
2025-05-25 00:00:29.257525 |
2025-05-25 00:00:29.257649 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-25 00:00:29.549281 | orchestrator | ok
2025-05-25 00:00:29.554917 |
2025-05-25 00:00:29.555004 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-25 00:00:45.527985 | orchestrator | changed:
2025-05-25 00:00:45.528244 | orchestrator | .d..t...... src/
2025-05-25 00:00:45.528280 | orchestrator | .d..t...... src/github.com/
2025-05-25 00:00:45.528305 | orchestrator | .d..t...... src/github.com/osism/
2025-05-25 00:00:45.528326 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-25 00:00:45.528347 | orchestrator | RedHat.yml
2025-05-25 00:00:45.540696 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-25 00:00:45.540714 | orchestrator | RedHat.yml
2025-05-25 00:00:45.540766 | orchestrator | = 2.2.0"...
2025-05-25 00:00:59.350456 | orchestrator | 00:00:59.350 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-25 00:00:59.423963 | orchestrator | 00:00:59.423 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-05-25 00:01:01.123086 | orchestrator | 00:01:01.122 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-25 00:01:02.121439 | orchestrator | 00:01:02.121 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-25 00:01:03.615910 | orchestrator | 00:01:03.615 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-25 00:01:04.586958 | orchestrator | 00:01:04.586 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-25 00:01:06.060153 | orchestrator | 00:01:06.059 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-05-25 00:01:07.392603 | orchestrator | 00:01:07.385 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-05-25 00:01:07.392719 | orchestrator | 00:01:07.386 STDOUT terraform: Providers are signed by their developers.
2025-05-25 00:01:07.392736 | orchestrator | 00:01:07.386 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-25 00:01:07.392748 | orchestrator | 00:01:07.386 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-25 00:01:07.392759 | orchestrator | 00:01:07.386 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-25 00:01:07.392777 | orchestrator | 00:01:07.386 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-25 00:01:07.392792 | orchestrator | 00:01:07.386 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-25 00:01:07.392803 | orchestrator | 00:01:07.386 STDOUT terraform: you run "tofu init" in the future.
2025-05-25 00:01:07.392813 | orchestrator | 00:01:07.386 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-25 00:01:07.392823 | orchestrator | 00:01:07.387 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-25 00:01:07.392833 | orchestrator | 00:01:07.387 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-25 00:01:07.392844 | orchestrator | 00:01:07.387 STDOUT terraform: should now work.
2025-05-25 00:01:07.392854 | orchestrator | 00:01:07.387 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-25 00:01:07.392864 | orchestrator | 00:01:07.387 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-25 00:01:07.392874 | orchestrator | 00:01:07.387 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-25 00:01:07.578385 | orchestrator | 00:01:07.578 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-25 00:01:07.778167 | orchestrator | 00:01:07.777 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-25 00:01:07.778277 | orchestrator | 00:01:07.778 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-25 00:01:07.778381 | orchestrator | 00:01:07.778 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-25 00:01:07.778402 | orchestrator | 00:01:07.778 STDOUT terraform: for this configuration.
2025-05-25 00:01:08.053916 | orchestrator | 00:01:08.053 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-25 00:01:08.168087 | orchestrator | 00:01:08.167 STDOUT terraform: ci.auto.tfvars
2025-05-25 00:01:08.172567 | orchestrator | 00:01:08.172 STDOUT terraform: default_custom.tf
2025-05-25 00:01:08.365333 | orchestrator | 00:01:08.365 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-25 00:01:09.376813 | orchestrator | 00:01:09.376 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
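For context on the init output above: the providers that `tofu init` resolved here (hashicorp/local, hashicorp/null, terraform-provider-openstack/openstack) would typically be declared in a `required_providers` block along the lines of the following sketch. This is an illustration only, not the testbed repository's actual configuration; the only version constraint visible in this log is `>= 1.53.0` for the openstack provider.

```hcl
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl`, as the output recommends, pins the versions resolved here (local v2.5.3, null v3.2.4, openstack v3.1.0) for subsequent runs.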
2025-05-25 00:01:09.914414 | orchestrator | 00:01:09.914 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-25 00:01:10.097081 | orchestrator | 00:01:10.096 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-25 00:01:10.097231 | orchestrator | 00:01:10.096 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-25 00:01:10.097251 | orchestrator | 00:01:10.097 STDOUT terraform:   + create
2025-05-25 00:01:10.097264 | orchestrator | 00:01:10.097 STDOUT terraform:  <= read (data resources)
2025-05-25 00:01:10.097276 | orchestrator | 00:01:10.097 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-25 00:01:10.097445 | orchestrator | 00:01:10.097 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-25 00:01:10.097468 | orchestrator | 00:01:10.097 STDOUT terraform:   # (config refers to values not yet known)
2025-05-25 00:01:10.097554 | orchestrator | 00:01:10.097 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-25 00:01:10.097689 | orchestrator | 00:01:10.097 STDOUT terraform:       + checksum    = (known after apply)
2025-05-25 00:01:10.097740 | orchestrator | 00:01:10.097 STDOUT terraform:       + created_at  = (known after apply)
2025-05-25 00:01:10.098057 | orchestrator | 00:01:10.097 STDOUT terraform:       + file        = (known after apply)
2025-05-25 00:01:10.098162 | orchestrator | 00:01:10.097 STDOUT terraform:       + id          = (known after apply)
2025-05-25 00:01:10.098195 | orchestrator | 00:01:10.097 STDOUT terraform:       + metadata    = (known after apply)
2025-05-25 00:01:10.098208 | orchestrator | 00:01:10.097 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-25 00:01:10.098220 | orchestrator | 00:01:10.098 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-05-25 00:01:10.098231 | orchestrator | 00:01:10.098 STDOUT terraform:       + most_recent = true
2025-05-25 00:01:10.098272 | orchestrator | 00:01:10.098 STDOUT terraform:       + name        = (known after apply)
2025-05-25 00:01:10.098339 | orchestrator | 00:01:10.098 STDOUT terraform:       + protected   = (known after apply)
2025-05-25 00:01:10.098435 | orchestrator | 00:01:10.098 STDOUT terraform:       + region      = (known after apply)
2025-05-25 00:01:10.098515 | orchestrator | 00:01:10.098 STDOUT terraform:       + schema      = (known after apply)
2025-05-25 00:01:10.098579 | orchestrator | 00:01:10.098 STDOUT terraform:       + size_bytes  = (known after apply)
2025-05-25 00:01:10.098695 | orchestrator | 00:01:10.098 STDOUT terraform:       + tags        = (known after apply)
2025-05-25 00:01:10.098770 | orchestrator | 00:01:10.098 STDOUT terraform:       + updated_at  = (known after apply)
2025-05-25 00:01:10.098789 | orchestrator | 00:01:10.098 STDOUT terraform:     }
2025-05-25 00:01:10.098975 | orchestrator | 00:01:10.098 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-25 00:01:10.098997 | orchestrator | 00:01:10.098 STDOUT terraform:   # (config refers to values not yet known)
2025-05-25 00:01:10.099086 | orchestrator | 00:01:10.098 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-25 00:01:10.099181 | orchestrator | 00:01:10.099 STDOUT terraform:       + checksum    = (known after apply)
2025-05-25 00:01:10.099284 | orchestrator | 00:01:10.099 STDOUT terraform:       + created_at  = (known after apply)
2025-05-25 00:01:10.099319 | orchestrator | 00:01:10.099 STDOUT terraform:       + file        = (known after apply)
2025-05-25 00:01:10.099406 | orchestrator | 00:01:10.099 STDOUT terraform:       + id          = (known after apply)
2025-05-25 00:01:10.099493 | orchestrator | 00:01:10.099 STDOUT terraform:       + metadata    = (known after apply)
2025-05-25 00:01:10.099600 | orchestrator | 00:01:10.099 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-25 00:01:10.099711 | orchestrator | 00:01:10.099 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-05-25 00:01:10.099730 | orchestrator | 00:01:10.099 STDOUT terraform:       + most_recent = true
2025-05-25 00:01:10.099820 | orchestrator | 00:01:10.099 STDOUT terraform:       + name        = (known after apply)
2025-05-25 00:01:10.099913 | orchestrator | 00:01:10.099 STDOUT terraform:       + protected   = (known after apply)
2025-05-25 00:01:10.100018 | orchestrator | 00:01:10.099 STDOUT terraform:       + region      = (known after apply)
2025-05-25 00:01:10.100094 | orchestrator | 00:01:10.099 STDOUT terraform:       + schema      = (known after apply)
2025-05-25 00:01:10.100173 | orchestrator | 00:01:10.100 STDOUT terraform:       + size_bytes  = (known after apply)
2025-05-25 00:01:10.100250 | orchestrator | 00:01:10.100 STDOUT terraform:       + tags        = (known after apply)
2025-05-25 00:01:10.100332 | orchestrator | 00:01:10.100 STDOUT terraform:       + updated_at  = (known after apply)
2025-05-25 00:01:10.100350 | orchestrator | 00:01:10.100 STDOUT terraform:     }
2025-05-25 00:01:10.100453 | orchestrator | 00:01:10.100 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-25 00:01:10.100541 | orchestrator | 00:01:10.100 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-25 00:01:10.100662 | orchestrator | 00:01:10.100 STDOUT terraform:       + content              = (known after apply)
2025-05-25 00:01:10.100751 | orchestrator | 00:01:10.100 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-25 00:01:10.100853 | orchestrator | 00:01:10.100 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-25 00:01:10.100955 | orchestrator | 00:01:10.100 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-25 00:01:10.101068 | orchestrator | 00:01:10.100 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-25 00:01:10.101170 | orchestrator | 00:01:10.101 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-25 00:01:10.101273 | orchestrator | 00:01:10.101 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-25 00:01:10.101292 | orchestrator | 00:01:10.101 STDOUT terraform:       + directory_permission = "0777"
2025-05-25 00:01:10.101376 | orchestrator | 00:01:10.101 STDOUT terraform:       + file_permission      = "0644"
2025-05-25 00:01:10.101480 | orchestrator | 00:01:10.101 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-05-25 00:01:10.101587 | orchestrator | 00:01:10.101 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.101602 | orchestrator | 00:01:10.101 STDOUT terraform:     }
2025-05-25 00:01:10.101714 | orchestrator | 00:01:10.101 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-25 00:01:10.101735 | orchestrator | 00:01:10.101 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-25 00:01:10.101929 | orchestrator | 00:01:10.101 STDOUT terraform:       + content              = (known after apply)
2025-05-25 00:01:10.102053 | orchestrator | 00:01:10.101 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-25 00:01:10.102073 | orchestrator | 00:01:10.101 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-25 00:01:10.102149 | orchestrator | 00:01:10.102 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-25 00:01:10.102248 | orchestrator | 00:01:10.102 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-25 00:01:10.102346 | orchestrator | 00:01:10.102 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-25 00:01:10.102444 | orchestrator | 00:01:10.102 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-25 00:01:10.102514 | orchestrator | 00:01:10.102 STDOUT terraform:       + directory_permission = "0777"
2025-05-25 00:01:10.102610 | orchestrator | 00:01:10.102 STDOUT terraform:       + file_permission      = "0644"
2025-05-25 00:01:10.102718 | orchestrator | 00:01:10.102 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-05-25 00:01:10.102826 | orchestrator | 00:01:10.102 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.102845 | orchestrator | 00:01:10.102 STDOUT terraform:     }
2025-05-25 00:01:10.102933 | orchestrator | 00:01:10.102 STDOUT terraform:   # local_file.inventory will be created
2025-05-25 00:01:10.102951 | orchestrator | 00:01:10.102 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-25 00:01:10.103056 | orchestrator | 00:01:10.102 STDOUT terraform:       + content              = (known after apply)
2025-05-25 00:01:10.103140 | orchestrator | 00:01:10.103 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-25 00:01:10.103225 | orchestrator | 00:01:10.103 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-25 00:01:10.103317 | orchestrator | 00:01:10.103 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-25 00:01:10.103432 | orchestrator | 00:01:10.103 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-25 00:01:10.103495 | orchestrator | 00:01:10.103 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-25 00:01:10.103578 | orchestrator | 00:01:10.103 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-25 00:01:10.103637 | orchestrator | 00:01:10.103 STDOUT terraform:       + directory_permission = "0777"
2025-05-25 00:01:10.103712 | orchestrator | 00:01:10.103 STDOUT terraform:       + file_permission      = "0644"
2025-05-25 00:01:10.103787 | orchestrator | 00:01:10.103 STDOUT terraform:       + filename             = "inventory.ci"
2025-05-25 00:01:10.103880 | orchestrator | 00:01:10.103 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.103898 | orchestrator | 00:01:10.103 STDOUT terraform:     }
2025-05-25 00:01:10.103975 | orchestrator | 00:01:10.103 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-25 00:01:10.104047 | orchestrator | 00:01:10.103 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-25 00:01:10.104123 | orchestrator | 00:01:10.104 STDOUT terraform:       + content              = (sensitive value)
2025-05-25 00:01:10.104212 | orchestrator | 00:01:10.104 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-25 00:01:10.104301 | orchestrator | 00:01:10.104 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-25 00:01:10.104383 | orchestrator | 00:01:10.104 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-25 00:01:10.104472 | orchestrator | 00:01:10.104 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-25 00:01:10.104600 | orchestrator | 00:01:10.104 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-25 00:01:10.104663 | orchestrator | 00:01:10.104 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-25 00:01:10.104742 | orchestrator | 00:01:10.104 STDOUT terraform:       + directory_permission = "0700"
2025-05-25 00:01:10.104801 | orchestrator | 00:01:10.104 STDOUT terraform:       + file_permission      = "0600"
2025-05-25 00:01:10.104876 | orchestrator | 00:01:10.104 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-05-25 00:01:10.104965 | orchestrator | 00:01:10.104 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.104983 | orchestrator | 00:01:10.104 STDOUT terraform:     }
2025-05-25 00:01:10.105062 | orchestrator | 00:01:10.104 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-25 00:01:10.105135 | orchestrator | 00:01:10.105 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-25 00:01:10.105187 | orchestrator | 00:01:10.105 STDOUT terraform:       + id = (known after apply)
2025-05-25 00:01:10.105204 | orchestrator | 00:01:10.105 STDOUT terraform:     }
2025-05-25 00:01:10.105315 | orchestrator | 00:01:10.105 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-25 00:01:10.105409 | orchestrator | 00:01:10.105 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-25 00:01:10.105483 | orchestrator | 00:01:10.105 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.105535 | orchestrator | 00:01:10.105 STDOUT terraform:       + availability_zone    = "nova"
2025-05-25 00:01:10.105617 | orchestrator | 00:01:10.105 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.105701 | orchestrator | 00:01:10.105 STDOUT terraform:       + image_id             = (known after apply)
2025-05-25 00:01:10.105767 | orchestrator | 00:01:10.105 STDOUT terraform:       + metadata             = (known after apply)
2025-05-25 00:01:10.105857 | orchestrator | 00:01:10.105 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-05-25 00:01:10.105971 | orchestrator | 00:01:10.105 STDOUT terraform:       + region               = (known after apply)
2025-05-25 00:01:10.105987 | orchestrator | 00:01:10.105 STDOUT terraform:       + size                 = 80
2025-05-25 00:01:10.106003 | orchestrator | 00:01:10.105 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-25 00:01:10.106082 | orchestrator | 00:01:10.105 STDOUT terraform:       + volume_type          = "ssd"
2025-05-25 00:01:10.106102 | orchestrator | 00:01:10.106 STDOUT terraform:     }
2025-05-25 00:01:10.106196 | orchestrator | 00:01:10.106 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-25 00:01:10.106291 | orchestrator | 00:01:10.106 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 00:01:10.106365 | orchestrator | 00:01:10.106 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.106414 | orchestrator | 00:01:10.106 STDOUT terraform:       + availability_zone    = "nova"
2025-05-25 00:01:10.106488 | orchestrator | 00:01:10.106 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.106561 | orchestrator | 00:01:10.106 STDOUT terraform:       + image_id             = (known after apply)
2025-05-25 00:01:10.106634 | orchestrator | 00:01:10.106 STDOUT terraform:       + metadata             = (known after apply)
2025-05-25 00:01:10.106806 | orchestrator | 00:01:10.106 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-05-25 00:01:10.106857 | orchestrator | 00:01:10.106 STDOUT terraform:       + region               = (known after apply)
2025-05-25 00:01:10.106872 | orchestrator | 00:01:10.106 STDOUT terraform:       + size                 = 80
2025-05-25 00:01:10.106887 | orchestrator | 00:01:10.106 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-25 00:01:10.106936 | orchestrator | 00:01:10.106 STDOUT terraform:       + volume_type          = "ssd"
2025-05-25 00:01:10.106952 | orchestrator | 00:01:10.106 STDOUT terraform:     }
2025-05-25 00:01:10.107051 | orchestrator | 00:01:10.106 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-25 00:01:10.107143 | orchestrator | 00:01:10.107 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 00:01:10.107224 | orchestrator | 00:01:10.107 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.107264 | orchestrator | 00:01:10.107 STDOUT terraform:       + availability_zone    = "nova"
2025-05-25 00:01:10.107339 | orchestrator | 00:01:10.107 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.107420 | orchestrator | 00:01:10.107 STDOUT terraform:       + image_id             = (known after apply)
2025-05-25 00:01:10.107492 | orchestrator | 00:01:10.107 STDOUT terraform:       + metadata             = (known after apply)
2025-05-25 00:01:10.107597 | orchestrator | 00:01:10.107 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-05-25 00:01:10.107710 | orchestrator | 00:01:10.107 STDOUT terraform:       + region               = (known after apply)
2025-05-25 00:01:10.107752 | orchestrator | 00:01:10.107 STDOUT terraform:       + size                 = 80
2025-05-25 00:01:10.107798 | orchestrator | 00:01:10.107 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-25 00:01:10.113772 | orchestrator | 00:01:10.107 STDOUT terraform:       + volume_type          = "ssd"
2025-05-25 00:01:10.113827 | orchestrator | 00:01:10.107 STDOUT terraform:     }
2025-05-25 00:01:10.113833 | orchestrator | 00:01:10.107 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-25 00:01:10.113838 | orchestrator | 00:01:10.108 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 00:01:10.113842 | orchestrator | 00:01:10.108 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.113847 | orchestrator | 00:01:10.108 STDOUT terraform:       + availability_zone    = "nova"
2025-05-25 00:01:10.113851 | orchestrator | 00:01:10.108 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.113855 | orchestrator | 00:01:10.108 STDOUT terraform:       + image_id             = (known after apply)
2025-05-25 00:01:10.113859 | orchestrator | 00:01:10.108 STDOUT terraform:       + metadata             = (known after apply)
2025-05-25 00:01:10.113862 | orchestrator | 00:01:10.108 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-05-25 00:01:10.113866 | orchestrator | 00:01:10.108 STDOUT terraform:       + region               = (known after apply)
2025-05-25 00:01:10.113870 | orchestrator | 00:01:10.108 STDOUT terraform:       + size                 = 80
2025-05-25 00:01:10.113874 | orchestrator | 00:01:10.108 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-25 00:01:10.113880 | orchestrator | 00:01:10.108 STDOUT terraform:       + volume_type          = "ssd"
2025-05-25 00:01:10.113884 | orchestrator | 00:01:10.108 STDOUT terraform:     }
2025-05-25 00:01:10.113888 | orchestrator | 00:01:10.108 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-25 00:01:10.113892 | orchestrator | 00:01:10.108 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 00:01:10.113896 | orchestrator | 00:01:10.108 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.113899 | orchestrator | 00:01:10.108 STDOUT terraform:       + availability_zone    = "nova"
2025-05-25 00:01:10.113903 | orchestrator | 00:01:10.108 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.113907 | orchestrator | 00:01:10.108 STDOUT terraform:       + image_id             = (known after apply)
2025-05-25 00:01:10.113911 | orchestrator | 00:01:10.108 STDOUT terraform:       + metadata             = (known after apply)
2025-05-25 00:01:10.113915 | orchestrator | 00:01:10.109 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-05-25 00:01:10.113918 | orchestrator | 00:01:10.109 STDOUT terraform:       + region               = (known after apply)
2025-05-25 00:01:10.113934 | orchestrator | 00:01:10.109 STDOUT terraform:       + size                 = 80
2025-05-25 00:01:10.113938 | orchestrator | 00:01:10.109 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-25 00:01:10.113942 | orchestrator | 00:01:10.109 STDOUT terraform:       + volume_type          = "ssd"
2025-05-25 00:01:10.113946 | orchestrator | 00:01:10.109 STDOUT terraform:     }
2025-05-25 00:01:10.113949 | orchestrator | 00:01:10.109 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-25 00:01:10.113953 | orchestrator | 00:01:10.109 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 00:01:10.113957 | orchestrator | 00:01:10.109 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.113961 | orchestrator | 00:01:10.109 STDOUT terraform:       + availability_zone    = "nova"
2025-05-25 00:01:10.113965 | orchestrator | 00:01:10.109 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.113969 | orchestrator | 00:01:10.109 STDOUT terraform:       + image_id             = (known after apply)
2025-05-25 00:01:10.113973 | orchestrator | 00:01:10.109 STDOUT terraform:       + metadata             = (known after apply)
2025-05-25 00:01:10.113977 | orchestrator | 00:01:10.109 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-05-25 00:01:10.113990 | orchestrator | 00:01:10.109 STDOUT terraform:       + region               = (known after apply)
2025-05-25 00:01:10.113994 | orchestrator | 00:01:10.109 STDOUT terraform:       + size                 = 80
2025-05-25 00:01:10.114004 | orchestrator | 00:01:10.109 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-25 00:01:10.114008 | orchestrator | 00:01:10.109 STDOUT terraform:       + volume_type          = "ssd"
2025-05-25 00:01:10.114031 | orchestrator | 00:01:10.109 STDOUT terraform:     }
2025-05-25 00:01:10.114036 | orchestrator | 00:01:10.109 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-25 00:01:10.114040 | orchestrator | 00:01:10.110 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-25 00:01:10.114044 | orchestrator | 00:01:10.110 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.114048 | orchestrator | 00:01:10.110 STDOUT terraform:       + availability_zone    = "nova"
2025-05-25 00:01:10.114052 | orchestrator | 00:01:10.110 STDOUT terraform:       + id                   = (known after apply)
2025-05-25 00:01:10.114056 | orchestrator | 00:01:10.110 STDOUT terraform:       + image_id             = (known after apply)
2025-05-25 00:01:10.114060 | orchestrator | 00:01:10.110 STDOUT terraform:       + metadata             = (known after apply)
2025-05-25 00:01:10.114063 | orchestrator | 00:01:10.110 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-05-25 00:01:10.114067 | orchestrator | 00:01:10.110 STDOUT terraform:       + region               = (known after apply)
2025-05-25 00:01:10.114071 | orchestrator | 00:01:10.110 STDOUT terraform:       + size                 = 80
2025-05-25 00:01:10.114075 | orchestrator | 00:01:10.110 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-25 00:01:10.114079 | orchestrator | 00:01:10.110 STDOUT terraform:       + volume_type          = "ssd"
2025-05-25 00:01:10.114082 | orchestrator | 00:01:10.110 STDOUT terraform:     }
2025-05-25 00:01:10.114091 | orchestrator | 00:01:10.110 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-25 00:01:10.114095 | orchestrator | 00:01:10.110 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-25 00:01:10.114098 | orchestrator | 00:01:10.110 STDOUT terraform:       + attachment           = (known after apply)
2025-05-25 00:01:10.114102 | orchestrator | 00:01:10.110 STDOUT terraform:       +
availability_zone = "nova" 2025-05-25 00:01:10.114106 | orchestrator | 00:01:10.110 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.114110 | orchestrator | 00:01:10.110 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114113 | orchestrator | 00:01:10.111 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-25 00:01:10.114117 | orchestrator | 00:01:10.111 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.114121 | orchestrator | 00:01:10.111 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.114125 | orchestrator | 00:01:10.111 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.114129 | orchestrator | 00:01:10.111 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.114132 | orchestrator | 00:01:10.111 STDOUT terraform:  } 2025-05-25 00:01:10.114136 | orchestrator | 00:01:10.111 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-25 00:01:10.114143 | orchestrator | 00:01:10.111 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.114147 | orchestrator | 00:01:10.111 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.114151 | orchestrator | 00:01:10.111 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.114154 | orchestrator | 00:01:10.111 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.114158 | orchestrator | 00:01:10.111 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114162 | orchestrator | 00:01:10.111 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-25 00:01:10.114169 | orchestrator | 00:01:10.111 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.114173 | orchestrator | 00:01:10.111 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.114177 | orchestrator | 00:01:10.111 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.114183 | orchestrator | 
00:01:10.111 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.114187 | orchestrator | 00:01:10.111 STDOUT terraform:  } 2025-05-25 00:01:10.114190 | orchestrator | 00:01:10.111 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-25 00:01:10.114194 | orchestrator | 00:01:10.111 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.114198 | orchestrator | 00:01:10.111 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.114202 | orchestrator | 00:01:10.111 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.114206 | orchestrator | 00:01:10.112 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.114214 | orchestrator | 00:01:10.112 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114217 | orchestrator | 00:01:10.112 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-25 00:01:10.114221 | orchestrator | 00:01:10.112 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.114225 | orchestrator | 00:01:10.112 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.114229 | orchestrator | 00:01:10.112 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.114233 | orchestrator | 00:01:10.112 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.114237 | orchestrator | 00:01:10.112 STDOUT terraform:  } 2025-05-25 00:01:10.114241 | orchestrator | 00:01:10.112 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-25 00:01:10.114244 | orchestrator | 00:01:10.112 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.114248 | orchestrator | 00:01:10.112 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.114252 | orchestrator | 00:01:10.112 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.114256 | orchestrator | 00:01:10.112 STDOUT 
terraform:  + id = (known after apply) 2025-05-25 00:01:10.114260 | orchestrator | 00:01:10.112 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114264 | orchestrator | 00:01:10.112 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-25 00:01:10.114267 | orchestrator | 00:01:10.112 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.114271 | orchestrator | 00:01:10.112 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.114278 | orchestrator | 00:01:10.112 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.114282 | orchestrator | 00:01:10.112 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.114286 | orchestrator | 00:01:10.112 STDOUT terraform:  } 2025-05-25 00:01:10.114290 | orchestrator | 00:01:10.112 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-25 00:01:10.114293 | orchestrator | 00:01:10.112 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.114297 | orchestrator | 00:01:10.113 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.114301 | orchestrator | 00:01:10.113 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.114305 | orchestrator | 00:01:10.113 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.114309 | orchestrator | 00:01:10.113 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114312 | orchestrator | 00:01:10.113 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-25 00:01:10.114316 | orchestrator | 00:01:10.113 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.114320 | orchestrator | 00:01:10.113 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.114329 | orchestrator | 00:01:10.113 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.114333 | orchestrator | 00:01:10.113 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.114340 | 
orchestrator | 00:01:10.113 STDOUT terraform:  } 2025-05-25 00:01:10.114344 | orchestrator | 00:01:10.113 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-25 00:01:10.114348 | orchestrator | 00:01:10.113 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.114352 | orchestrator | 00:01:10.113 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.114356 | orchestrator | 00:01:10.113 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.114359 | orchestrator | 00:01:10.113 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.114363 | orchestrator | 00:01:10.113 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114367 | orchestrator | 00:01:10.113 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-25 00:01:10.114371 | orchestrator | 00:01:10.113 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.114375 | orchestrator | 00:01:10.113 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.114378 | orchestrator | 00:01:10.113 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.114382 | orchestrator | 00:01:10.113 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.114386 | orchestrator | 00:01:10.113 STDOUT terraform:  } 2025-05-25 00:01:10.114390 | orchestrator | 00:01:10.113 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-25 00:01:10.114394 | orchestrator | 00:01:10.114 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.114397 | orchestrator | 00:01:10.114 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.114401 | orchestrator | 00:01:10.114 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.114405 | orchestrator | 00:01:10.114 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.114409 | orchestrator | 
00:01:10.114 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114415 | orchestrator | 00:01:10.114 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-25 00:01:10.114419 | orchestrator | 00:01:10.114 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.114460 | orchestrator | 00:01:10.114 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.114498 | orchestrator | 00:01:10.114 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.114535 | orchestrator | 00:01:10.114 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.114542 | orchestrator | 00:01:10.114 STDOUT terraform:  } 2025-05-25 00:01:10.114768 | orchestrator | 00:01:10.114 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-25 00:01:10.114855 | orchestrator | 00:01:10.114 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.114882 | orchestrator | 00:01:10.114 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.114896 | orchestrator | 00:01:10.114 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.114907 | orchestrator | 00:01:10.114 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.114951 | orchestrator | 00:01:10.114 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.114968 | orchestrator | 00:01:10.114 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-25 00:01:10.115029 | orchestrator | 00:01:10.114 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.115046 | orchestrator | 00:01:10.115 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.115062 | orchestrator | 00:01:10.115 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.115106 | orchestrator | 00:01:10.115 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.115122 | orchestrator | 00:01:10.115 STDOUT terraform:  } 2025-05-25 00:01:10.115195 | orchestrator | 
00:01:10.115 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-25 00:01:10.115245 | orchestrator | 00:01:10.115 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-25 00:01:10.115290 | orchestrator | 00:01:10.115 STDOUT terraform:  + attachment = (known after apply) 2025-05-25 00:01:10.115307 | orchestrator | 00:01:10.115 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.115376 | orchestrator | 00:01:10.115 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.115422 | orchestrator | 00:01:10.115 STDOUT terraform:  + metadata = (known after apply) 2025-05-25 00:01:10.115466 | orchestrator | 00:01:10.115 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-25 00:01:10.115521 | orchestrator | 00:01:10.115 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.115537 | orchestrator | 00:01:10.115 STDOUT terraform:  + size = 20 2025-05-25 00:01:10.115561 | orchestrator | 00:01:10.115 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-25 00:01:10.115606 | orchestrator | 00:01:10.115 STDOUT terraform:  + volume_type = "ssd" 2025-05-25 00:01:10.115619 | orchestrator | 00:01:10.115 STDOUT terraform:  } 2025-05-25 00:01:10.115692 | orchestrator | 00:01:10.115 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-25 00:01:10.115748 | orchestrator | 00:01:10.115 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-25 00:01:10.115793 | orchestrator | 00:01:10.115 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 00:01:10.115838 | orchestrator | 00:01:10.115 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 00:01:10.115883 | orchestrator | 00:01:10.115 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 00:01:10.115927 | orchestrator | 00:01:10.115 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 
00:01:10.115943 | orchestrator | 00:01:10.115 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.115986 | orchestrator | 00:01:10.115 STDOUT terraform:  + config_drive = true 2025-05-25 00:01:10.116031 | orchestrator | 00:01:10.115 STDOUT terraform:  + created = (known after apply) 2025-05-25 00:01:10.116077 | orchestrator | 00:01:10.116 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 00:01:10.116103 | orchestrator | 00:01:10.116 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-25 00:01:10.116161 | orchestrator | 00:01:10.116 STDOUT terraform:  + force_delete = false 2025-05-25 00:01:10.116178 | orchestrator | 00:01:10.116 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-25 00:01:10.116241 | orchestrator | 00:01:10.116 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.116300 | orchestrator | 00:01:10.116 STDOUT terraform:  + image_id = (known after apply) 2025-05-25 00:01:10.116318 | orchestrator | 00:01:10.116 STDOUT terraform:  + image_name = (known after apply) 2025-05-25 00:01:10.116365 | orchestrator | 00:01:10.116 STDOUT terraform:  + key_pair = "testbed" 2025-05-25 00:01:10.116421 | orchestrator | 00:01:10.116 STDOUT terraform:  + name = "testbed-manager" 2025-05-25 00:01:10.116434 | orchestrator | 00:01:10.116 STDOUT terraform:  + power_state = "active" 2025-05-25 00:01:10.116489 | orchestrator | 00:01:10.116 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.116535 | orchestrator | 00:01:10.116 STDOUT terraform:  + security_groups = (known after apply) 2025-05-25 00:01:10.116551 | orchestrator | 00:01:10.116 STDOUT terraform:  + stop_before_destroy = false 2025-05-25 00:01:10.116611 | orchestrator | 00:01:10.116 STDOUT terraform:  + updated = (known after apply) 2025-05-25 00:01:10.116673 | orchestrator | 00:01:10.116 STDOUT terraform:  + user_data = (known after apply) 2025-05-25 00:01:10.116687 | orchestrator | 00:01:10.116 STDOUT terraform:  + block_device 
{ 2025-05-25 00:01:10.116702 | orchestrator | 00:01:10.116 STDOUT terraform:  + boot_index = 0 2025-05-25 00:01:10.116758 | orchestrator | 00:01:10.116 STDOUT terraform:  + delete_on_termination = false 2025-05-25 00:01:10.116776 | orchestrator | 00:01:10.116 STDOUT terraform:  + destination_type = "volume" 2025-05-25 00:01:10.116836 | orchestrator | 00:01:10.116 STDOUT terraform:  + multiattach = false 2025-05-25 00:01:10.116853 | orchestrator | 00:01:10.116 STDOUT terraform:  + source_type = "volume" 2025-05-25 00:01:10.116928 | orchestrator | 00:01:10.116 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 00:01:10.116942 | orchestrator | 00:01:10.116 STDOUT terraform:  } 2025-05-25 00:01:10.116957 | orchestrator | 00:01:10.116 STDOUT terraform:  + network { 2025-05-25 00:01:10.116972 | orchestrator | 00:01:10.116 STDOUT terraform:  + access_network = false 2025-05-25 00:01:10.117025 | orchestrator | 00:01:10.116 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-25 00:01:10.117081 | orchestrator | 00:01:10.117 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-25 00:01:10.117098 | orchestrator | 00:01:10.117 STDOUT terraform:  + mac = (known after apply) 2025-05-25 00:01:10.117154 | orchestrator | 00:01:10.117 STDOUT terraform:  + name = (known after apply) 2025-05-25 00:01:10.117172 | orchestrator | 00:01:10.117 STDOUT terraform:  + port = (known after apply) 2025-05-25 00:01:10.117239 | orchestrator | 00:01:10.117 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 00:01:10.117260 | orchestrator | 00:01:10.117 STDOUT terraform:  } 2025-05-25 00:01:10.117276 | orchestrator | 00:01:10.117 STDOUT terraform:  } 2025-05-25 00:01:10.117330 | orchestrator | 00:01:10.117 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-25 00:01:10.117390 | orchestrator | 00:01:10.117 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-25 00:01:10.117436 | orchestrator | 
00:01:10.117 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 00:01:10.117481 | orchestrator | 00:01:10.117 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 00:01:10.117525 | orchestrator | 00:01:10.117 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 00:01:10.117570 | orchestrator | 00:01:10.117 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.117586 | orchestrator | 00:01:10.117 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.117666 | orchestrator | 00:01:10.117 STDOUT terraform:  + config_drive = true 2025-05-25 00:01:10.117685 | orchestrator | 00:01:10.117 STDOUT terraform:  + created = (known after apply) 2025-05-25 00:01:10.117730 | orchestrator | 00:01:10.117 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 00:01:10.117786 | orchestrator | 00:01:10.117 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-25 00:01:10.117799 | orchestrator | 00:01:10.117 STDOUT terraform:  + force_delete = false 2025-05-25 00:01:10.117843 | orchestrator | 00:01:10.117 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-25 00:01:10.117897 | orchestrator | 00:01:10.117 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.117942 | orchestrator | 00:01:10.117 STDOUT terraform:  + image_id = (known after apply) 2025-05-25 00:01:10.117987 | orchestrator | 00:01:10.117 STDOUT terraform:  + image_name = (known after apply) 2025-05-25 00:01:10.118003 | orchestrator | 00:01:10.117 STDOUT terraform:  + key_pair = "testbed" 2025-05-25 00:01:10.118078 | orchestrator | 00:01:10.117 STDOUT terraform:  + name = "testbed-node-0" 2025-05-25 00:01:10.118098 | orchestrator | 00:01:10.118 STDOUT terraform:  + power_state = "active" 2025-05-25 00:01:10.118227 | orchestrator | 00:01:10.118 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.118244 | orchestrator | 00:01:10.118 STDOUT terraform:  + security_groups = (known after apply) 
2025-05-25 00:01:10.118252 | orchestrator | 00:01:10.118 STDOUT terraform:  + stop_before_destroy = false 2025-05-25 00:01:10.118275 | orchestrator | 00:01:10.118 STDOUT terraform:  + updated = (known after apply) 2025-05-25 00:01:10.118345 | orchestrator | 00:01:10.118 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-25 00:01:10.118366 | orchestrator | 00:01:10.118 STDOUT terraform:  + block_device { 2025-05-25 00:01:10.118398 | orchestrator | 00:01:10.118 STDOUT terraform:  + boot_index = 0 2025-05-25 00:01:10.118440 | orchestrator | 00:01:10.118 STDOUT terraform:  + delete_on_termination = false 2025-05-25 00:01:10.118482 | orchestrator | 00:01:10.118 STDOUT terraform:  + destination_type = "volume" 2025-05-25 00:01:10.118523 | orchestrator | 00:01:10.118 STDOUT terraform:  + multiattach = false 2025-05-25 00:01:10.118565 | orchestrator | 00:01:10.118 STDOUT terraform:  + source_type = "volume" 2025-05-25 00:01:10.118620 | orchestrator | 00:01:10.118 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 00:01:10.118627 | orchestrator | 00:01:10.118 STDOUT terraform:  } 2025-05-25 00:01:10.118681 | orchestrator | 00:01:10.118 STDOUT terraform:  + network { 2025-05-25 00:01:10.118713 | orchestrator | 00:01:10.118 STDOUT terraform:  + access_network = false 2025-05-25 00:01:10.118756 | orchestrator | 00:01:10.118 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-25 00:01:10.118800 | orchestrator | 00:01:10.118 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-25 00:01:10.118844 | orchestrator | 00:01:10.118 STDOUT terraform:  + mac = (known after apply) 2025-05-25 00:01:10.118888 | orchestrator | 00:01:10.118 STDOUT terraform:  + name = (known after apply) 2025-05-25 00:01:10.118933 | orchestrator | 00:01:10.118 STDOUT terraform:  + port = (known after apply) 2025-05-25 00:01:10.118978 | orchestrator | 00:01:10.118 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 00:01:10.118984 | 
orchestrator | 00:01:10.118 STDOUT terraform:  } 2025-05-25 00:01:10.119011 | orchestrator | 00:01:10.118 STDOUT terraform:  } 2025-05-25 00:01:10.119070 | orchestrator | 00:01:10.119 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-25 00:01:10.119123 | orchestrator | 00:01:10.119 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-25 00:01:10.119166 | orchestrator | 00:01:10.119 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 00:01:10.119211 | orchestrator | 00:01:10.119 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 00:01:10.119254 | orchestrator | 00:01:10.119 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 00:01:10.119298 | orchestrator | 00:01:10.119 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.119329 | orchestrator | 00:01:10.119 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.119355 | orchestrator | 00:01:10.119 STDOUT terraform:  + config_drive = true 2025-05-25 00:01:10.119399 | orchestrator | 00:01:10.119 STDOUT terraform:  + created = (known after apply) 2025-05-25 00:01:10.119443 | orchestrator | 00:01:10.119 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 00:01:10.119480 | orchestrator | 00:01:10.119 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-25 00:01:10.119511 | orchestrator | 00:01:10.119 STDOUT terraform:  + force_delete = false 2025-05-25 00:01:10.119554 | orchestrator | 00:01:10.119 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-25 00:01:10.119599 | orchestrator | 00:01:10.119 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.119658 | orchestrator | 00:01:10.119 STDOUT terraform:  + image_id = (known after apply) 2025-05-25 00:01:10.119700 | orchestrator | 00:01:10.119 STDOUT terraform:  + image_name = (known after apply) 2025-05-25 00:01:10.119732 | orchestrator | 00:01:10.119 STDOUT terraform:  + 
key_pair = "testbed" 2025-05-25 00:01:10.119772 | orchestrator | 00:01:10.119 STDOUT terraform:  + name = "testbed-node-1" 2025-05-25 00:01:10.119804 | orchestrator | 00:01:10.119 STDOUT terraform:  + power_state = "active" 2025-05-25 00:01:10.119848 | orchestrator | 00:01:10.119 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.119891 | orchestrator | 00:01:10.119 STDOUT terraform:  + security_groups = (known after apply) 2025-05-25 00:01:10.119920 | orchestrator | 00:01:10.119 STDOUT terraform:  + stop_before_destroy = false 2025-05-25 00:01:10.119965 | orchestrator | 00:01:10.119 STDOUT terraform:  + updated = (known after apply) 2025-05-25 00:01:10.120029 | orchestrator | 00:01:10.119 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-25 00:01:10.120049 | orchestrator | 00:01:10.120 STDOUT terraform:  + block_device { 2025-05-25 00:01:10.120077 | orchestrator | 00:01:10.120 STDOUT terraform:  + boot_index = 0 2025-05-25 00:01:10.120112 | orchestrator | 00:01:10.120 STDOUT terraform:  + delete_on_termination = false 2025-05-25 00:01:10.120149 | orchestrator | 00:01:10.120 STDOUT terraform:  + destination_type = "volume" 2025-05-25 00:01:10.120186 | orchestrator | 00:01:10.120 STDOUT terraform:  + multiattach = false 2025-05-25 00:01:10.120226 | orchestrator | 00:01:10.120 STDOUT terraform:  + source_type = "volume" 2025-05-25 00:01:10.120272 | orchestrator | 00:01:10.120 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 00:01:10.120278 | orchestrator | 00:01:10.120 STDOUT terraform:  } 2025-05-25 00:01:10.120303 | orchestrator | 00:01:10.120 STDOUT terraform:  + network { 2025-05-25 00:01:10.120331 | orchestrator | 00:01:10.120 STDOUT terraform:  + access_network = false 2025-05-25 00:01:10.120369 | orchestrator | 00:01:10.120 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-25 00:01:10.120408 | orchestrator | 00:01:10.120 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-25 
00:01:10.120448 | orchestrator | 00:01:10.120 STDOUT terraform:  + mac = (known after apply) 2025-05-25 00:01:10.120488 | orchestrator | 00:01:10.120 STDOUT terraform:  + name = (known after apply) 2025-05-25 00:01:10.120528 | orchestrator | 00:01:10.120 STDOUT terraform:  + port = (known after apply) 2025-05-25 00:01:10.120568 | orchestrator | 00:01:10.120 STDOUT terraform:  + uuid = (known after apply) 2025-05-25 00:01:10.120574 | orchestrator | 00:01:10.120 STDOUT terraform:  } 2025-05-25 00:01:10.120593 | orchestrator | 00:01:10.120 STDOUT terraform:  } 2025-05-25 00:01:10.120659 | orchestrator | 00:01:10.120 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-25 00:01:10.120712 | orchestrator | 00:01:10.120 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-25 00:01:10.120757 | orchestrator | 00:01:10.120 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-25 00:01:10.120800 | orchestrator | 00:01:10.120 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-25 00:01:10.120843 | orchestrator | 00:01:10.120 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-25 00:01:10.120888 | orchestrator | 00:01:10.120 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.120917 | orchestrator | 00:01:10.120 STDOUT terraform:  + availability_zone = "nova" 2025-05-25 00:01:10.120945 | orchestrator | 00:01:10.120 STDOUT terraform:  + config_drive = true 2025-05-25 00:01:10.120990 | orchestrator | 00:01:10.120 STDOUT terraform:  + created = (known after apply) 2025-05-25 00:01:10.121033 | orchestrator | 00:01:10.120 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-25 00:01:10.121071 | orchestrator | 00:01:10.121 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-25 00:01:10.121101 | orchestrator | 00:01:10.121 STDOUT terraform:  + force_delete = false 2025-05-25 00:01:10.121144 | orchestrator | 00:01:10.121 STDOUT terraform:  + 
2025-05-25 00:01:10 | orchestrator | 00:01:10 STDOUT terraform: (per-line timestamps and log prefixes collapsed below)

      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] and node_server[5] will be created
  # (plans identical to node_server[3], except name = "testbed-node-4" and "testbed-node-5")

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [8] will be created
  # (plans identical to node_volume_attachment[0])

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-05-25 00:01:10.133556 | orchestrator | 00:01:10.133 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 00:01:10.133570 | orchestrator | 00:01:10.133 STDOUT terraform:  } 2025-05-25 00:01:10.133582 | orchestrator | 00:01:10.133 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.133597 | orchestrator | 00:01:10.133 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 00:01:10.133608 | orchestrator | 00:01:10.133 STDOUT terraform:  } 2025-05-25 00:01:10.133632 | orchestrator | 00:01:10.133 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.133702 | orchestrator | 00:01:10.133 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 00:01:10.133718 | orchestrator | 00:01:10.133 STDOUT terraform:  } 2025-05-25 00:01:10.133756 | orchestrator | 00:01:10.133 STDOUT terraform:  + binding (known after apply) 2025-05-25 00:01:10.133772 | orchestrator | 00:01:10.133 STDOUT terraform:  + fixed_ip { 2025-05-25 00:01:10.133809 | orchestrator | 00:01:10.133 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-25 00:01:10.133825 | orchestrator | 00:01:10.133 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 00:01:10.133840 | orchestrator | 00:01:10.133 STDOUT terraform:  } 2025-05-25 00:01:10.133859 | orchestrator | 00:01:10.133 STDOUT terraform:  } 2025-05-25 00:01:10.133900 | orchestrator | 00:01:10.133 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-25 00:01:10.133939 | orchestrator | 00:01:10.133 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 00:01:10.133978 | orchestrator | 00:01:10.133 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 00:01:10.133994 | orchestrator | 00:01:10.133 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 00:01:10.134119 | orchestrator | 00:01:10.133 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-05-25 00:01:10.134152 | orchestrator | 00:01:10.134 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.134162 | orchestrator | 00:01:10.134 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 00:01:10.134166 | orchestrator | 00:01:10.134 STDOUT terraform:  + device_owner = (known after apply) 2025-05-25 00:01:10.134194 | orchestrator | 00:01:10.134 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-25 00:01:10.134231 | orchestrator | 00:01:10.134 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 00:01:10.134273 | orchestrator | 00:01:10.134 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.134305 | orchestrator | 00:01:10.134 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 00:01:10.134343 | orchestrator | 00:01:10.134 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 00:01:10.134379 | orchestrator | 00:01:10.134 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 00:01:10.134414 | orchestrator | 00:01:10.134 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 00:01:10.134452 | orchestrator | 00:01:10.134 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.134490 | orchestrator | 00:01:10.134 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 00:01:10.134526 | orchestrator | 00:01:10.134 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.134548 | orchestrator | 00:01:10.134 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.134578 | orchestrator | 00:01:10.134 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-25 00:01:10.134585 | orchestrator | 00:01:10.134 STDOUT terraform:  } 2025-05-25 00:01:10.134609 | orchestrator | 00:01:10.134 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.134656 | orchestrator | 00:01:10.134 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 00:01:10.134683 | 
orchestrator | 00:01:10.134 STDOUT terraform:  } 2025-05-25 00:01:10.134704 | orchestrator | 00:01:10.134 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.134734 | orchestrator | 00:01:10.134 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 00:01:10.134740 | orchestrator | 00:01:10.134 STDOUT terraform:  } 2025-05-25 00:01:10.134767 | orchestrator | 00:01:10.134 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.134798 | orchestrator | 00:01:10.134 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 00:01:10.134812 | orchestrator | 00:01:10.134 STDOUT terraform:  } 2025-05-25 00:01:10.134829 | orchestrator | 00:01:10.134 STDOUT terraform:  + binding (known after apply) 2025-05-25 00:01:10.134835 | orchestrator | 00:01:10.134 STDOUT terraform:  + fixed_ip { 2025-05-25 00:01:10.134865 | orchestrator | 00:01:10.134 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-25 00:01:10.134895 | orchestrator | 00:01:10.134 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 00:01:10.134902 | orchestrator | 00:01:10.134 STDOUT terraform:  } 2025-05-25 00:01:10.134918 | orchestrator | 00:01:10.134 STDOUT terraform:  } 2025-05-25 00:01:10.134962 | orchestrator | 00:01:10.134 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-25 00:01:10.135008 | orchestrator | 00:01:10.134 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 00:01:10.135045 | orchestrator | 00:01:10.135 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 00:01:10.135081 | orchestrator | 00:01:10.135 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 00:01:10.135118 | orchestrator | 00:01:10.135 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-25 00:01:10.135155 | orchestrator | 00:01:10.135 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.135192 | orchestrator | 
00:01:10.135 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 00:01:10.135228 | orchestrator | 00:01:10.135 STDOUT terraform:  + device_owner = (known after apply) 2025-05-25 00:01:10.135264 | orchestrator | 00:01:10.135 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-25 00:01:10.135300 | orchestrator | 00:01:10.135 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 00:01:10.135338 | orchestrator | 00:01:10.135 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.135374 | orchestrator | 00:01:10.135 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 00:01:10.135412 | orchestrator | 00:01:10.135 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 00:01:10.135447 | orchestrator | 00:01:10.135 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 00:01:10.135484 | orchestrator | 00:01:10.135 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 00:01:10.135521 | orchestrator | 00:01:10.135 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.135557 | orchestrator | 00:01:10.135 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 00:01:10.135595 | orchestrator | 00:01:10.135 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.135611 | orchestrator | 00:01:10.135 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.135650 | orchestrator | 00:01:10.135 STDOUT terraform:  + ip_address = "192.168.112.0/2 2025-05-25 00:01:10.135707 | orchestrator | 00:01:10.135 STDOUT terraform: 0" 2025-05-25 00:01:10.135713 | orchestrator | 00:01:10.135 STDOUT terraform:  } 2025-05-25 00:01:10.135737 | orchestrator | 00:01:10.135 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.135768 | orchestrator | 00:01:10.135 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 00:01:10.135774 | orchestrator | 00:01:10.135 STDOUT terraform:  } 2025-05-25 00:01:10.135798 | orchestrator 
| 00:01:10.135 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.135827 | orchestrator | 00:01:10.135 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 00:01:10.135833 | orchestrator | 00:01:10.135 STDOUT terraform:  } 2025-05-25 00:01:10.135857 | orchestrator | 00:01:10.135 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.135885 | orchestrator | 00:01:10.135 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 00:01:10.135891 | orchestrator | 00:01:10.135 STDOUT terraform:  } 2025-05-25 00:01:10.135920 | orchestrator | 00:01:10.135 STDOUT terraform:  + binding (known after apply) 2025-05-25 00:01:10.135926 | orchestrator | 00:01:10.135 STDOUT terraform:  + fixed_ip { 2025-05-25 00:01:10.135958 | orchestrator | 00:01:10.135 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-25 00:01:10.135987 | orchestrator | 00:01:10.135 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 00:01:10.135994 | orchestrator | 00:01:10.135 STDOUT terraform:  } 2025-05-25 00:01:10.136010 | orchestrator | 00:01:10.135 STDOUT terraform:  } 2025-05-25 00:01:10.136055 | orchestrator | 00:01:10.136 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-25 00:01:10.136099 | orchestrator | 00:01:10.136 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 00:01:10.136135 | orchestrator | 00:01:10.136 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 00:01:10.136172 | orchestrator | 00:01:10.136 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 00:01:10.136208 | orchestrator | 00:01:10.136 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-25 00:01:10.136247 | orchestrator | 00:01:10.136 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.136280 | orchestrator | 00:01:10.136 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 00:01:10.136316 | 
orchestrator | 00:01:10.136 STDOUT terraform:  + device_owner = (known after apply) 2025-05-25 00:01:10.136352 | orchestrator | 00:01:10.136 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-25 00:01:10.136390 | orchestrator | 00:01:10.136 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 00:01:10.136427 | orchestrator | 00:01:10.136 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.136463 | orchestrator | 00:01:10.136 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 00:01:10.136499 | orchestrator | 00:01:10.136 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 00:01:10.136533 | orchestrator | 00:01:10.136 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 00:01:10.136570 | orchestrator | 00:01:10.136 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 00:01:10.136609 | orchestrator | 00:01:10.136 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.136655 | orchestrator | 00:01:10.136 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 00:01:10.136781 | orchestrator | 00:01:10.136 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.136819 | orchestrator | 00:01:10.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.136830 | orchestrator | 00:01:10.136 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-25 00:01:10.136841 | orchestrator | 00:01:10.136 STDOUT terraform:  } 2025-05-25 00:01:10.136852 | orchestrator | 00:01:10.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.136876 | orchestrator | 00:01:10.136 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 00:01:10.136900 | orchestrator | 00:01:10.136 STDOUT terraform:  } 2025-05-25 00:01:10.136919 | orchestrator | 00:01:10.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.136935 | orchestrator | 00:01:10.136 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-05-25 00:01:10.136950 | orchestrator | 00:01:10.136 STDOUT terraform:  } 2025-05-25 00:01:10.136967 | orchestrator | 00:01:10.136 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.136991 | orchestrator | 00:01:10.136 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 00:01:10.137009 | orchestrator | 00:01:10.136 STDOUT terraform:  } 2025-05-25 00:01:10.137028 | orchestrator | 00:01:10.136 STDOUT terraform:  + binding (known after apply) 2025-05-25 00:01:10.137047 | orchestrator | 00:01:10.136 STDOUT terraform:  + fixed_ip { 2025-05-25 00:01:10.137066 | orchestrator | 00:01:10.136 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-25 00:01:10.137085 | orchestrator | 00:01:10.136 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 00:01:10.137103 | orchestrator | 00:01:10.136 STDOUT terraform:  } 2025-05-25 00:01:10.137122 | orchestrator | 00:01:10.136 STDOUT terraform:  } 2025-05-25 00:01:10.137146 | orchestrator | 00:01:10.136 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-25 00:01:10.137166 | orchestrator | 00:01:10.137 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-25 00:01:10.137186 | orchestrator | 00:01:10.137 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 00:01:10.137206 | orchestrator | 00:01:10.137 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-25 00:01:10.137225 | orchestrator | 00:01:10.137 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-25 00:01:10.137236 | orchestrator | 00:01:10.137 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.137251 | orchestrator | 00:01:10.137 STDOUT terraform:  + device_id = (known after apply) 2025-05-25 00:01:10.137265 | orchestrator | 00:01:10.137 STDOUT terraform:  + device_owner = (known after apply) 2025-05-25 00:01:10.137316 | orchestrator | 00:01:10.137 STDOUT terraform:  + 
dns_assignment = (known after apply) 2025-05-25 00:01:10.137332 | orchestrator | 00:01:10.137 STDOUT terraform:  + dns_name = (known after apply) 2025-05-25 00:01:10.137382 | orchestrator | 00:01:10.137 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.137412 | orchestrator | 00:01:10.137 STDOUT terraform:  + mac_address = (known after apply) 2025-05-25 00:01:10.137464 | orchestrator | 00:01:10.137 STDOUT terraform:  + network_id = (known after apply) 2025-05-25 00:01:10.137480 | orchestrator | 00:01:10.137 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-25 00:01:10.137519 | orchestrator | 00:01:10.137 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-25 00:01:10.137572 | orchestrator | 00:01:10.137 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.137627 | orchestrator | 00:01:10.137 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-25 00:01:10.137696 | orchestrator | 00:01:10.137 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.137721 | orchestrator | 00:01:10.137 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.137737 | orchestrator | 00:01:10.137 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-25 00:01:10.137748 | orchestrator | 00:01:10.137 STDOUT terraform:  } 2025-05-25 00:01:10.137763 | orchestrator | 00:01:10.137 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.137814 | orchestrator | 00:01:10.137 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-25 00:01:10.137827 | orchestrator | 00:01:10.137 STDOUT terraform:  } 2025-05-25 00:01:10.137838 | orchestrator | 00:01:10.137 STDOUT terraform:  + allowed_address_pairs { 2025-05-25 00:01:10.137853 | orchestrator | 00:01:10.137 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-25 00:01:10.137865 | orchestrator | 00:01:10.137 STDOUT terraform:  } 2025-05-25 00:01:10.137888 | orchestrator | 00:01:10.137 STDOUT terraform:  + 
allowed_address_pairs { 2025-05-25 00:01:10.137918 | orchestrator | 00:01:10.137 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-25 00:01:10.137944 | orchestrator | 00:01:10.137 STDOUT terraform:  } 2025-05-25 00:01:10.137965 | orchestrator | 00:01:10.137 STDOUT terraform:  + binding (known after apply) 2025-05-25 00:01:10.137983 | orchestrator | 00:01:10.137 STDOUT terraform:  + fixed_ip { 2025-05-25 00:01:10.138001 | orchestrator | 00:01:10.137 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-25 00:01:10.138058 | orchestrator | 00:01:10.137 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 00:01:10.138078 | orchestrator | 00:01:10.137 STDOUT terraform:  } 2025-05-25 00:01:10.138094 | orchestrator | 00:01:10.137 STDOUT terraform:  } 2025-05-25 00:01:10.138115 | orchestrator | 00:01:10.137 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-25 00:01:10.138132 | orchestrator | 00:01:10.138 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-25 00:01:10.138154 | orchestrator | 00:01:10.138 STDOUT terraform:  + force_destroy = false 2025-05-25 00:01:10.138176 | orchestrator | 00:01:10.138 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.138198 | orchestrator | 00:01:10.138 STDOUT terraform:  + port_id = (known after apply) 2025-05-25 00:01:10.138220 | orchestrator | 00:01:10.138 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.138242 | orchestrator | 00:01:10.138 STDOUT terraform:  + router_id = (known after apply) 2025-05-25 00:01:10.138280 | orchestrator | 00:01:10.138 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-25 00:01:10.138301 | orchestrator | 00:01:10.138 STDOUT terraform:  } 2025-05-25 00:01:10.138323 | orchestrator | 00:01:10.138 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-25 00:01:10.138338 | orchestrator | 00:01:10.138 STDOUT 
terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-25 00:01:10.138387 | orchestrator | 00:01:10.138 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-25 00:01:10.138416 | orchestrator | 00:01:10.138 STDOUT terraform:  + all_tags = (known after apply) 2025-05-25 00:01:10.138432 | orchestrator | 00:01:10.138 STDOUT terraform:  + availability_zone_hints = [ 2025-05-25 00:01:10.138446 | orchestrator | 00:01:10.138 STDOUT terraform:  + "nova", 2025-05-25 00:01:10.138464 | orchestrator | 00:01:10.138 STDOUT terraform:  ] 2025-05-25 00:01:10.138500 | orchestrator | 00:01:10.138 STDOUT terraform:  + distributed = (known after apply) 2025-05-25 00:01:10.138536 | orchestrator | 00:01:10.138 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-25 00:01:10.138584 | orchestrator | 00:01:10.138 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-25 00:01:10.138622 | orchestrator | 00:01:10.138 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.138678 | orchestrator | 00:01:10.138 STDOUT terraform:  + name = "testbed" 2025-05-25 00:01:10.138726 | orchestrator | 00:01:10.138 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.138781 | orchestrator | 00:01:10.138 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.138823 | orchestrator | 00:01:10.138 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-25 00:01:10.138876 | orchestrator | 00:01:10.138 STDOUT terraform:  } 2025-05-25 00:01:10.138920 | orchestrator | 00:01:10.138 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-25 00:01:10.139005 | orchestrator | 00:01:10.138 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-25 00:01:10.139042 | orchestrator | 00:01:10.138 STDOUT terraform:  + description = "ssh" 2025-05-25 00:01:10.139079 | orchestrator 
| 00:01:10.139 STDOUT terraform:  + direction = "ingress" 2025-05-25 00:01:10.139204 | orchestrator | 00:01:10.139 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 00:01:10.139218 | orchestrator | 00:01:10.139 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.139229 | orchestrator | 00:01:10.139 STDOUT terraform:  + port_range_max = 22 2025-05-25 00:01:10.139240 | orchestrator | 00:01:10.139 STDOUT terraform:  + port_range_min = 22 2025-05-25 00:01:10.139254 | orchestrator | 00:01:10.139 STDOUT terraform:  + protocol = "tcp" 2025-05-25 00:01:10.139294 | orchestrator | 00:01:10.139 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.139311 | orchestrator | 00:01:10.139 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 00:01:10.139465 | orchestrator | 00:01:10.139 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 00:01:10.139485 | orchestrator | 00:01:10.139 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 00:01:10.139497 | orchestrator | 00:01:10.139 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.140245 | orchestrator | 00:01:10.139 STDOUT terraform:  } 2025-05-25 00:01:10.140287 | orchestrator | 00:01:10.139 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-25 00:01:10.140300 | orchestrator | 00:01:10.139 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-25 00:01:10.140312 | orchestrator | 00:01:10.139 STDOUT terraform:  + description = "wireguard" 2025-05-25 00:01:10.140337 | orchestrator | 00:01:10.139 STDOUT terraform:  + direction = "ingress" 2025-05-25 00:01:10.140349 | orchestrator | 00:01:10.139 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 00:01:10.140360 | orchestrator | 00:01:10.139 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.140371 | orchestrator | 00:01:10.139 STDOUT terraform:  + 
port_range_max = 51820 2025-05-25 00:01:10.140382 | orchestrator | 00:01:10.139 STDOUT terraform:  + port_range_min = 51820 2025-05-25 00:01:10.140393 | orchestrator | 00:01:10.139 STDOUT terraform:  + protocol = "udp" 2025-05-25 00:01:10.140403 | orchestrator | 00:01:10.139 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.140414 | orchestrator | 00:01:10.139 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 00:01:10.140425 | orchestrator | 00:01:10.139 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 00:01:10.140436 | orchestrator | 00:01:10.139 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 00:01:10.140447 | orchestrator | 00:01:10.139 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.140471 | orchestrator | 00:01:10.139 STDOUT terraform:  } 2025-05-25 00:01:10.140483 | orchestrator | 00:01:10.139 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-25 00:01:10.140494 | orchestrator | 00:01:10.139 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-25 00:01:10.140505 | orchestrator | 00:01:10.139 STDOUT terraform:  + direction = "ingress" 2025-05-25 00:01:10.140516 | orchestrator | 00:01:10.139 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 00:01:10.140532 | orchestrator | 00:01:10.139 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.140543 | orchestrator | 00:01:10.139 STDOUT terraform:  + protocol = "tcp" 2025-05-25 00:01:10.140554 | orchestrator | 00:01:10.139 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.140565 | orchestrator | 00:01:10.139 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 00:01:10.140576 | orchestrator | 00:01:10.139 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-25 00:01:10.140587 | orchestrator | 00:01:10.140 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-05-25 00:01:10.140598 | orchestrator | 00:01:10.140 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.140625 | orchestrator | 00:01:10.140 STDOUT terraform:  } 2025-05-25 00:01:10.140637 | orchestrator | 00:01:10.140 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-25 00:01:10.140731 | orchestrator | 00:01:10.140 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-25 00:01:10.140745 | orchestrator | 00:01:10.140 STDOUT terraform:  + direction = "ingress" 2025-05-25 00:01:10.140757 | orchestrator | 00:01:10.140 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 00:01:10.140768 | orchestrator | 00:01:10.140 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.140778 | orchestrator | 00:01:10.140 STDOUT terraform:  + protocol = "udp" 2025-05-25 00:01:10.140789 | orchestrator | 00:01:10.140 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.140800 | orchestrator | 00:01:10.140 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 00:01:10.140811 | orchestrator | 00:01:10.140 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-25 00:01:10.140828 | orchestrator | 00:01:10.140 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 00:01:10.140839 | orchestrator | 00:01:10.140 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.140850 | orchestrator | 00:01:10.140 STDOUT terraform:  } 2025-05-25 00:01:10.140861 | orchestrator | 00:01:10.140 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-25 00:01:10.140872 | orchestrator | 00:01:10.140 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-25 00:01:10.140883 | orchestrator | 00:01:10.140 STDOUT terraform:  + 
direction = "ingress" 2025-05-25 00:01:10.140894 | orchestrator | 00:01:10.140 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 00:01:10.140905 | orchestrator | 00:01:10.140 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.140916 | orchestrator | 00:01:10.140 STDOUT terraform:  + protocol = "icmp" 2025-05-25 00:01:10.140927 | orchestrator | 00:01:10.140 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.140937 | orchestrator | 00:01:10.140 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 00:01:10.140948 | orchestrator | 00:01:10.140 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-25 00:01:10.140959 | orchestrator | 00:01:10.140 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-25 00:01:10.140970 | orchestrator | 00:01:10.140 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-25 00:01:10.140981 | orchestrator | 00:01:10.140 STDOUT terraform:  } 2025-05-25 00:01:10.140992 | orchestrator | 00:01:10.140 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-25 00:01:10.141003 | orchestrator | 00:01:10.140 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-25 00:01:10.141014 | orchestrator | 00:01:10.140 STDOUT terraform:  + direction = "ingress" 2025-05-25 00:01:10.141029 | orchestrator | 00:01:10.140 STDOUT terraform:  + ethertype = "IPv4" 2025-05-25 00:01:10.141048 | orchestrator | 00:01:10.140 STDOUT terraform:  + id = (known after apply) 2025-05-25 00:01:10.141065 | orchestrator | 00:01:10.140 STDOUT terraform:  + protocol = "tcp" 2025-05-25 00:01:10.141076 | orchestrator | 00:01:10.140 STDOUT terraform:  + region = (known after apply) 2025-05-25 00:01:10.141087 | orchestrator | 00:01:10.140 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-25 00:01:10.141098 | orchestrator | 00:01:10.140 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 
2025-05-25 00:01:10.141109 | orchestrator | 00:01:10.140 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-25 00:01:10.141119 | orchestrator | 00:01:10.140 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-25 00:01:10.141130 | orchestrator | 00:01:10.141 STDOUT terraform:  }
2025-05-25 00:01:10.141145 | orchestrator | 00:01:10.141 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-05-25 00:01:10.141157 | orchestrator | 00:01:10.141 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-05-25 00:01:10.141168 | orchestrator | 00:01:10.141 STDOUT terraform:  + direction = "ingress"
2025-05-25 00:01:10.141182 | orchestrator | 00:01:10.141 STDOUT terraform:  + ethertype = "IPv4"
2025-05-25 00:01:10.141193 | orchestrator | 00:01:10.141 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.141208 | orchestrator | 00:01:10.141 STDOUT terraform:  + protocol = "udp"
2025-05-25 00:01:10.141223 | orchestrator | 00:01:10.141 STDOUT terraform:  + region = (known after apply)
2025-05-25 00:01:10.141259 | orchestrator | 00:01:10.141 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-25 00:01:10.141275 | orchestrator | 00:01:10.141 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-25 00:01:10.143905 | orchestrator | 00:01:10.141 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-25 00:01:10.143951 | orchestrator | 00:01:10.141 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-25 00:01:10.143960 | orchestrator | 00:01:10.141 STDOUT terraform:  }
2025-05-25 00:01:10.143967 | orchestrator | 00:01:10.141 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-05-25 00:01:10.143975 | orchestrator | 00:01:10.141 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-05-25 00:01:10.143982 | orchestrator | 00:01:10.141 STDOUT terraform:  + direction = "ingress"
2025-05-25 00:01:10.143988 | orchestrator | 00:01:10.141 STDOUT terraform:  + ethertype = "IPv4"
2025-05-25 00:01:10.143994 | orchestrator | 00:01:10.141 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.144000 | orchestrator | 00:01:10.141 STDOUT terraform:  + protocol = "icmp"
2025-05-25 00:01:10.144007 | orchestrator | 00:01:10.141 STDOUT terraform:  + region = (known after apply)
2025-05-25 00:01:10.144013 | orchestrator | 00:01:10.141 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-25 00:01:10.144019 | orchestrator | 00:01:10.141 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-25 00:01:10.144037 | orchestrator | 00:01:10.141 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-25 00:01:10.144043 | orchestrator | 00:01:10.141 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-25 00:01:10.144050 | orchestrator | 00:01:10.141 STDOUT terraform:  }
2025-05-25 00:01:10.144056 | orchestrator | 00:01:10.141 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-05-25 00:01:10.144063 | orchestrator | 00:01:10.141 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-05-25 00:01:10.144070 | orchestrator | 00:01:10.141 STDOUT terraform:  + description = "vrrp"
2025-05-25 00:01:10.144076 | orchestrator | 00:01:10.141 STDOUT terraform:  + direction = "ingress"
2025-05-25 00:01:10.144081 | orchestrator | 00:01:10.141 STDOUT terraform:  + ethertype = "IPv4"
2025-05-25 00:01:10.144090 | orchestrator | 00:01:10.141 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.144099 | orchestrator | 00:01:10.141 STDOUT terraform:  + protocol = "112"
2025-05-25 00:01:10.144104 | orchestrator | 00:01:10.141 STDOUT terraform:  + region = (known after apply)
2025-05-25 00:01:10.144107 | orchestrator | 00:01:10.141 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-25 00:01:10.144111 | orchestrator | 00:01:10.141 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-25 00:01:10.144115 | orchestrator | 00:01:10.141 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-25 00:01:10.144119 | orchestrator | 00:01:10.141 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-25 00:01:10.144123 | orchestrator | 00:01:10.141 STDOUT terraform:  }
2025-05-25 00:01:10.144127 | orchestrator | 00:01:10.142 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-05-25 00:01:10.144131 | orchestrator | 00:01:10.142 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-05-25 00:01:10.144135 | orchestrator | 00:01:10.142 STDOUT terraform:  + all_tags = (known after apply)
2025-05-25 00:01:10.144138 | orchestrator | 00:01:10.142 STDOUT terraform:  + description = "management security group"
2025-05-25 00:01:10.144142 | orchestrator | 00:01:10.142 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.144146 | orchestrator | 00:01:10.142 STDOUT terraform:  + name = "testbed-management"
2025-05-25 00:01:10.144150 | orchestrator | 00:01:10.142 STDOUT terraform:  + region = (known after apply)
2025-05-25 00:01:10.144154 | orchestrator | 00:01:10.142 STDOUT terraform:  + stateful = (known after apply)
2025-05-25 00:01:10.144158 | orchestrator | 00:01:10.142 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-25 00:01:10.144171 | orchestrator | 00:01:10.142 STDOUT terraform:  }
2025-05-25 00:01:10.144175 | orchestrator | 00:01:10.142 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-05-25 00:01:10.144179 | orchestrator | 00:01:10.142 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-05-25 00:01:10.144183 | orchestrator | 00:01:10.142 STDOUT terraform:  + all_tags = (known after apply)
2025-05-25 00:01:10.144187 | orchestrator | 00:01:10.142 STDOUT terraform:  + description = "node security group"
2025-05-25 00:01:10.144194 | orchestrator | 00:01:10.142 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.144198 | orchestrator | 00:01:10.142 STDOUT terraform:  + name = "testbed-node"
2025-05-25 00:01:10.144202 | orchestrator | 00:01:10.142 STDOUT terraform:  + region = (known after apply)
2025-05-25 00:01:10.144206 | orchestrator | 00:01:10.142 STDOUT terraform:  + stateful = (known after apply)
2025-05-25 00:01:10.144210 | orchestrator | 00:01:10.142 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-25 00:01:10.144213 | orchestrator | 00:01:10.142 STDOUT terraform:  }
2025-05-25 00:01:10.144217 | orchestrator | 00:01:10.142 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-05-25 00:01:10.144221 | orchestrator | 00:01:10.142 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-05-25 00:01:10.144225 | orchestrator | 00:01:10.142 STDOUT terraform:  + all_tags = (known after apply)
2025-05-25 00:01:10.144229 | orchestrator | 00:01:10.142 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-05-25 00:01:10.144233 | orchestrator | 00:01:10.142 STDOUT terraform:  + dns_nameservers = [
2025-05-25 00:01:10.144237 | orchestrator | 00:01:10.143 STDOUT terraform:  + "8.8.8.8",
2025-05-25 00:01:10.144241 | orchestrator | 00:01:10.143 STDOUT terraform:  + "9.9.9.9",
2025-05-25 00:01:10.144244 | orchestrator | 00:01:10.143 STDOUT terraform:  ]
2025-05-25 00:01:10.144248 | orchestrator | 00:01:10.143 STDOUT terraform:  + enable_dhcp = true
2025-05-25 00:01:10.144252 | orchestrator | 00:01:10.143 STDOUT terraform:  + gateway_ip = (known after apply)
2025-05-25 00:01:10.144256 | orchestrator | 00:01:10.143 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.144260 | orchestrator | 00:01:10.143 STDOUT terraform:  + ip_version = 4
2025-05-25 00:01:10.144264 | orchestrator | 00:01:10.143 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-05-25 00:01:10.144268 | orchestrator | 00:01:10.143 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-05-25 00:01:10.144272 | orchestrator | 00:01:10.143 STDOUT terraform:  + name = "subnet-testbed-management"
2025-05-25 00:01:10.144276 | orchestrator | 00:01:10.143 STDOUT terraform:  + network_id = (known after apply)
2025-05-25 00:01:10.144279 | orchestrator | 00:01:10.143 STDOUT terraform:  + no_gateway = false
2025-05-25 00:01:10.144283 | orchestrator | 00:01:10.143 STDOUT terraform:  + region = (known after apply)
2025-05-25 00:01:10.144287 | orchestrator | 00:01:10.143 STDOUT terraform:  + service_types = (known after apply)
2025-05-25 00:01:10.144291 | orchestrator | 00:01:10.143 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-25 00:01:10.144295 | orchestrator | 00:01:10.143 STDOUT terraform:  + allocation_pool {
2025-05-25 00:01:10.144299 | orchestrator | 00:01:10.143 STDOUT terraform:  + end = "192.168.31.250"
2025-05-25 00:01:10.144303 | orchestrator | 00:01:10.143 STDOUT terraform:  + start = "192.168.31.200"
2025-05-25 00:01:10.144307 | orchestrator | 00:01:10.143 STDOUT terraform:  }
2025-05-25 00:01:10.144311 | orchestrator | 00:01:10.143 STDOUT terraform:  }
2025-05-25 00:01:10.144322 | orchestrator | 00:01:10.143 STDOUT terraform:  # terraform_data.image will be created
2025-05-25 00:01:10.144348 | orchestrator | 00:01:10.143 STDOUT terraform:  + resource "terraform_data" "image" {
2025-05-25 00:01:10.144352 | orchestrator | 00:01:10.143 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.144359 | orchestrator | 00:01:10.143 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-25 00:01:10.144363 | orchestrator | 00:01:10.143 STDOUT terraform:  + output = (known after apply)
2025-05-25 00:01:10.144367 | orchestrator | 00:01:10.143 STDOUT terraform:  }
2025-05-25 00:01:10.144371 | orchestrator | 00:01:10.143 STDOUT terraform:  # terraform_data.image_node will be created
2025-05-25 00:01:10.144375 | orchestrator | 00:01:10.143 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-05-25 00:01:10.144379 | orchestrator | 00:01:10.143 STDOUT terraform:  + id = (known after apply)
2025-05-25 00:01:10.144382 | orchestrator | 00:01:10.143 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-25 00:01:10.144386 | orchestrator | 00:01:10.143 STDOUT terraform:  + output = (known after apply)
2025-05-25 00:01:10.144390 | orchestrator | 00:01:10.143 STDOUT terraform:  }
2025-05-25 00:01:10.144394 | orchestrator | 00:01:10.143 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-05-25 00:01:10.144398 | orchestrator | 00:01:10.143 STDOUT terraform: Changes to Outputs:
2025-05-25 00:01:10.144402 | orchestrator | 00:01:10.143 STDOUT terraform:  + manager_address = (sensitive value)
2025-05-25 00:01:10.144406 | orchestrator | 00:01:10.143 STDOUT terraform:  + private_key = (sensitive value)
2025-05-25 00:01:10.360483 | orchestrator | 00:01:10.360 STDOUT terraform: terraform_data.image_node: Creating...
2025-05-25 00:01:10.360568 | orchestrator | 00:01:10.360 STDOUT terraform: terraform_data.image: Creating...
2025-05-25 00:01:10.360590 | orchestrator | 00:01:10.360 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=e55e3339-be7a-ffec-578d-eb829fdad773]
2025-05-25 00:01:10.360721 | orchestrator | 00:01:10.360 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=95da7faf-6521-f582-2ccd-d164413929db]
2025-05-25 00:01:10.379733 | orchestrator | 00:01:10.379 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-05-25 00:01:10.380478 | orchestrator | 00:01:10.380 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-05-25 00:01:10.387299 | orchestrator | 00:01:10.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-05-25 00:01:10.388464 | orchestrator | 00:01:10.388 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-05-25 00:01:10.389190 | orchestrator | 00:01:10.389 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-05-25 00:01:10.391755 | orchestrator | 00:01:10.391 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-05-25 00:01:10.392378 | orchestrator | 00:01:10.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-05-25 00:01:10.393445 | orchestrator | 00:01:10.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-05-25 00:01:10.393553 | orchestrator | 00:01:10.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-05-25 00:01:10.395797 | orchestrator | 00:01:10.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-05-25 00:01:10.825785 | orchestrator | 00:01:10.825 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-25 00:01:10.833383 | orchestrator | 00:01:10.833 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-05-25 00:01:10.865161 | orchestrator | 00:01:10.864 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-05-25 00:01:10.873861 | orchestrator | 00:01:10.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-05-25 00:01:11.159675 | orchestrator | 00:01:11.159 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-25 00:01:11.167617 | orchestrator | 00:01:11.167 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-05-25 00:01:16.331464 | orchestrator | 00:01:16.331 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=ed4599d6-d13b-46c0-a102-72cb7d3bd02a]
2025-05-25 00:01:16.344535 | orchestrator | 00:01:16.344 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-25 00:01:20.389890 | orchestrator | 00:01:20.389 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-05-25 00:01:20.395279 | orchestrator | 00:01:20.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-05-25 00:01:20.396176 | orchestrator | 00:01:20.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-05-25 00:01:20.396268 | orchestrator | 00:01:20.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-05-25 00:01:20.396285 | orchestrator | 00:01:20.396 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-05-25 00:01:20.397313 | orchestrator | 00:01:20.397 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-05-25 00:01:20.833936 | orchestrator | 00:01:20.833 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-05-25 00:01:20.874209 | orchestrator | 00:01:20.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-05-25 00:01:20.972117 | orchestrator | 00:01:20.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=a7a2bb5e-544e-42c6-9dad-0ece7cbc632c]
2025-05-25 00:01:20.984323 | orchestrator | 00:01:20.984 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-25 00:01:20.991562 | orchestrator | 00:01:20.991 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=c8fbae2968fdb9c3bd2c050ab0206d3bb5e8783f]
2025-05-25 00:01:20.996493 | orchestrator | 00:01:20.996 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=00903628-efdf-425a-bac1-d89af04936e9]
2025-05-25 00:01:21.003432 | orchestrator | 00:01:21.003 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-25 00:01:21.004758 | orchestrator | 00:01:21.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-05-25 00:01:21.009366 | orchestrator | 00:01:21.009 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6]
2025-05-25 00:01:21.011344 | orchestrator | 00:01:21.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=5d6b2858-a2bf-4730-a36e-7c509d6038b8]
2025-05-25 00:01:21.013026 | orchestrator | 00:01:21.012 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=7693b42064dd9a6ee8ec02877e566678bbf44fa4]
2025-05-25 00:01:21.020588 | orchestrator | 00:01:21.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-05-25 00:01:21.020972 | orchestrator | 00:01:21.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-25 00:01:21.022295 | orchestrator | 00:01:21.022 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-05-25 00:01:21.023349 | orchestrator | 00:01:21.023 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=45989edd-037d-47c1-af48-ae55f96e814d]
2025-05-25 00:01:21.025757 | orchestrator | 00:01:21.025 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=5104b556-d7c3-42e9-9230-39ae2abd74e9]
2025-05-25 00:01:21.027284 | orchestrator | 00:01:21.027 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-05-25 00:01:21.029163 | orchestrator | 00:01:21.029 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-05-25 00:01:21.095219 | orchestrator | 00:01:21.094 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=f90c35ea-44f5-4677-8ded-e7e6ddf8d55d]
2025-05-25 00:01:21.102907 | orchestrator | 00:01:21.102 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=70c7a39a-01cf-4431-b65e-7bc8a8e29825]
2025-05-25 00:01:21.106500 | orchestrator | 00:01:21.106 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-25 00:01:21.168782 | orchestrator | 00:01:21.168 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-05-25 00:01:21.344884 | orchestrator | 00:01:21.344 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=a4234bd8-7c33-4d3a-bb78-5919196abab5]
2025-05-25 00:01:26.348020 | orchestrator | 00:01:26.347 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-25 00:01:26.658972 | orchestrator | 00:01:26.658 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=3b5dda4f-f5fd-43ac-b637-e14ecdb197d0]
2025-05-25 00:01:26.973823 | orchestrator | 00:01:26.973 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=add0e5d9-7457-4d5d-bf49-eb6d7137db27]
2025-05-25 00:01:26.980210 | orchestrator | 00:01:26.980 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-25 00:01:31.005686 | orchestrator | 00:01:31.005 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-25 00:01:31.021786 | orchestrator | 00:01:31.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-25 00:01:31.021908 | orchestrator | 00:01:31.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-25 00:01:31.022919 | orchestrator | 00:01:31.022 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-25 00:01:31.028338 | orchestrator | 00:01:31.028 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-25 00:01:31.030578 | orchestrator | 00:01:31.030 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-25 00:01:31.373040 | orchestrator | 00:01:31.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=837412a5-fe4a-44e8-b41a-275c23b45357]
2025-05-25 00:01:31.382752 | orchestrator | 00:01:31.382 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=2e42b604-2874-4965-a971-13f8550546b1]
2025-05-25 00:01:31.385884 | orchestrator | 00:01:31.385 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=9fe04323-1149-4ab9-818d-0974511b9fdf]
2025-05-25 00:01:31.425496 | orchestrator | 00:01:31.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=6eb15f43-4781-46a1-a915-57ab21ed02ae]
2025-05-25 00:01:31.426068 | orchestrator | 00:01:31.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=eb7c7597-082a-4802-b2b2-08165cf24c9b]
2025-05-25 00:01:31.439641 | orchestrator | 00:01:31.439 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=eeee712c-196d-42b2-b707-3a3109b31946]
2025-05-25 00:01:34.836148 | orchestrator | 00:01:34.835 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=8707e492-a712-458e-b356-5a40b19d6cd8]
2025-05-25 00:01:34.843640 | orchestrator | 00:01:34.843 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-25 00:01:34.844442 | orchestrator | 00:01:34.844 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-25 00:01:34.847484 | orchestrator | 00:01:34.847 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-25 00:01:35.039372 | orchestrator | 00:01:35.038 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=1b7d0f11-3c8b-4db0-a1f4-be8b030f217c]
2025-05-25 00:01:35.056120 | orchestrator | 00:01:35.055 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-25 00:01:35.060705 | orchestrator | 00:01:35.060 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-25 00:01:35.061447 | orchestrator | 00:01:35.061 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-25 00:01:35.062528 | orchestrator | 00:01:35.062 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-25 00:01:35.063754 | orchestrator | 00:01:35.063 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-25 00:01:35.071047 | orchestrator | 00:01:35.070 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-25 00:01:35.072190 | orchestrator | 00:01:35.072 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d51c0d95-1a0c-4c54-acca-f3364c67ff0f]
2025-05-25 00:01:35.074055 | orchestrator | 00:01:35.073 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-25 00:01:35.074281 | orchestrator | 00:01:35.074 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-25 00:01:35.084155 | orchestrator | 00:01:35.084 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-25 00:01:35.529293 | orchestrator | 00:01:35.528 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=708789f0-0e4d-4c97-b07f-6e5c2cfae37a]
2025-05-25 00:01:35.543731 | orchestrator | 00:01:35.543 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-25 00:01:35.718544 | orchestrator | 00:01:35.718 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=3e399949-c784-4108-ab79-a68397e13221]
2025-05-25 00:01:35.727511 | orchestrator | 00:01:35.727 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-25 00:01:35.882124 | orchestrator | 00:01:35.881 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=58b0c76b-e489-4990-9297-31841041f2bc]
2025-05-25 00:01:35.890672 | orchestrator | 00:01:35.890 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-25 00:01:35.907582 | orchestrator | 00:01:35.907 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=b8671527-b60a-4ce2-9f41-9babc74e3a80]
2025-05-25 00:01:35.912695 | orchestrator | 00:01:35.912 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-25 00:01:36.065114 | orchestrator | 00:01:36.064 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=542bec10-9a63-4420-b5b6-02b151db5838]
2025-05-25 00:01:36.076850 | orchestrator | 00:01:36.076 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-25 00:01:36.223790 | orchestrator | 00:01:36.223 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=9c343910-d8aa-42f6-a091-7bff809f93df]
2025-05-25 00:01:36.229432 | orchestrator | 00:01:36.229 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-25 00:01:36.384353 | orchestrator | 00:01:36.383 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=257b71c9-203c-4482-af07-46b93c73e977]
2025-05-25 00:01:36.392208 | orchestrator | 00:01:36.391 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-25 00:01:36.557590 | orchestrator | 00:01:36.557 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=6fb2c115-77bd-4f02-a5fa-7b9ff24a455f]
2025-05-25 00:01:36.607260 | orchestrator | 00:01:36.606 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=82bcc480-efaf-43b5-bf77-4bb8636b62d5]
2025-05-25 00:01:40.755658 | orchestrator | 00:01:40.755 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=e09d70f9-bc80-484b-a81b-255928b27bc2]
2025-05-25 00:01:40.771398 | orchestrator | 00:01:40.771 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=86db76b1-e0b3-4a48-ac7b-8b83e403d44b]
2025-05-25 00:01:40.797221 | orchestrator | 00:01:40.796 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=9740b160-20ae-4bf2-9c34-93e0cf302493]
2025-05-25 00:01:40.808128 | orchestrator | 00:01:40.807 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=bf45a673-8836-44ae-86cb-21bba1a4d0d8]
2025-05-25 00:01:40.816995 | orchestrator | 00:01:40.816 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=609e722c-e122-429d-8762-d6cbd398a358]
2025-05-25 00:01:41.216261 | orchestrator | 00:01:41.215 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=a22ab17a-a5ef-40a4-b39a-783f4c221e83]
2025-05-25 00:01:42.125655 | orchestrator | 00:01:42.125 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=c3117f56-8399-4fef-8027-0bf293421634]
2025-05-25 00:01:42.810755 | orchestrator | 00:01:42.810 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=37ea3665-e769-4f5e-8e9c-f23eddf59772]
2025-05-25 00:01:42.828234 | orchestrator | 00:01:42.828 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-25 00:01:42.842223 | orchestrator | 00:01:42.842 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-25 00:01:42.846516 | orchestrator | 00:01:42.846 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-25 00:01:42.847521 | orchestrator | 00:01:42.847 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-25 00:01:42.854709 | orchestrator | 00:01:42.854 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-25 00:01:42.855884 | orchestrator | 00:01:42.855 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-25 00:01:42.863852 | orchestrator | 00:01:42.863 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-25 00:01:49.233705 | orchestrator | 00:01:49.233 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=4689469d-42cb-4ecf-b4dd-d3c9798f7343]
2025-05-25 00:01:49.252301 | orchestrator | 00:01:49.251 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-25 00:01:49.252379 | orchestrator | 00:01:49.252 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-25 00:01:49.252389 | orchestrator | 00:01:49.252 STDOUT terraform: local_file.inventory: Creating...
2025-05-25 00:01:49.257004 | orchestrator | 00:01:49.256 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=be17dcd776d8b5081461255a6e82b13ecd9845ec]
2025-05-25 00:01:49.257296 | orchestrator | 00:01:49.257 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=8bf9921e50ec133377a3c526a3861ae33eb02830]
2025-05-25 00:01:49.960281 | orchestrator | 00:01:49.959 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=4689469d-42cb-4ecf-b4dd-d3c9798f7343]
2025-05-25 00:01:52.843918 | orchestrator | 00:01:52.843 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-25 00:01:52.849961 | orchestrator | 00:01:52.849 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-25 00:01:52.857714 | orchestrator | 00:01:52.857 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-25 00:01:52.857845 | orchestrator | 00:01:52.857 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-25 00:01:52.861894 | orchestrator | 00:01:52.861 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-25 00:01:52.863991 | orchestrator | 00:01:52.863 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-25 00:02:02.847773 | orchestrator | 00:02:02.847 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-25 00:02:02.850863 | orchestrator | 00:02:02.850 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-25 00:02:02.858941 | orchestrator | 00:02:02.858 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-25 00:02:02.859052 | orchestrator | 00:02:02.858 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-25 00:02:02.862770 | orchestrator | 00:02:02.862 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-25 00:02:02.865062 | orchestrator | 00:02:02.864 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-25 00:02:03.353186 | orchestrator | 00:02:03.352 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=cf67cc64-f5fc-4c79-ada1-f2a7b74eb149]
2025-05-25 00:02:03.468387 | orchestrator | 00:02:03.468 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=74d1e81b-672c-4829-9554-42e9e44fa06a]
2025-05-25 00:02:03.488129 | orchestrator | 00:02:03.487 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=f5418efb-0798-4a04-9b8d-69f24b1d1af3]
2025-05-25 00:02:12.851454 | orchestrator | 00:02:12.851 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-05-25 00:02:12.859810 | orchestrator | 00:02:12.859 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-05-25 00:02:12.859926 | orchestrator | 00:02:12.859 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-05-25 00:02:13.520759 | orchestrator | 00:02:13.520 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=866b58dd-1348-4ca9-b4ed-939529fba40e]
2025-05-25 00:02:13.765291 | orchestrator | 00:02:13.764 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=dd746a5f-5b83-4980-b804-6d11f66b4499]
2025-05-25 00:02:13.961720 | orchestrator | 00:02:13.961 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=aa8cf8e0-bca6-48f0-bd0b-6f83b182b187]
2025-05-25 00:02:13.991398 | orchestrator | 00:02:13.988 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-25 00:02:13.992228 | orchestrator | 00:02:13.992 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-25 00:02:13.995593 | orchestrator | 00:02:13.995 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-25 00:02:13.995658 | orchestrator | 00:02:13.995 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2621996491445199369]
2025-05-25 00:02:13.995802 | orchestrator | 00:02:13.995 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-25 00:02:13.997996 | orchestrator | 00:02:13.997 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-25 00:02:14.004211 | orchestrator | 00:02:14.003 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-25 00:02:14.010107 | orchestrator | 00:02:14.008 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-25 00:02:14.015686 | orchestrator | 00:02:14.015 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-25 00:02:14.017462 | orchestrator | 00:02:14.017 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-25 00:02:14.023285 | orchestrator | 00:02:14.023 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-05-25 00:02:14.030508 | orchestrator | 00:02:14.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-25 00:02:19.297530 | orchestrator | 00:02:19.296 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=74d1e81b-672c-4829-9554-42e9e44fa06a/f90c35ea-44f5-4677-8ded-e7e6ddf8d55d]
2025-05-25 00:02:19.309742 | orchestrator | 00:02:19.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=dd746a5f-5b83-4980-b804-6d11f66b4499/70c7a39a-01cf-4431-b65e-7bc8a8e29825]
2025-05-25 00:02:19.330349 | orchestrator | 00:02:19.329 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=dd746a5f-5b83-4980-b804-6d11f66b4499/a4234bd8-7c33-4d3a-bb78-5919196abab5]
2025-05-25 00:02:19.348827 | orchestrator | 00:02:19.348 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=74d1e81b-672c-4829-9554-42e9e44fa06a/5d6b2858-a2bf-4730-a36e-7c509d6038b8]
2025-05-25 00:02:19.376567 | orchestrator | 00:02:19.375 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=dd746a5f-5b83-4980-b804-6d11f66b4499/5104b556-d7c3-42e9-9230-39ae2abd74e9]
2025-05-25 00:02:19.381969 | orchestrator | 00:02:19.381 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=866b58dd-1348-4ca9-b4ed-939529fba40e/00903628-efdf-425a-bac1-d89af04936e9]
2025-05-25 00:02:19.392855 | orchestrator | 00:02:19.392 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=74d1e81b-672c-4829-9554-42e9e44fa06a/b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6]
2025-05-25 00:02:19.405922 | orchestrator | 00:02:19.405 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=866b58dd-1348-4ca9-b4ed-939529fba40e/45989edd-037d-47c1-af48-ae55f96e814d]
2025-05-25 00:02:19.466599 | orchestrator | 00:02:19.465 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=866b58dd-1348-4ca9-b4ed-939529fba40e/a7a2bb5e-544e-42c6-9dad-0ece7cbc632c]
2025-05-25 00:02:24.028152 | orchestrator | 00:02:24.027 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-05-25 00:02:34.029040 | orchestrator | 00:02:34.028 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-05-25 00:02:34.430090 | orchestrator | 00:02:34.429 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=0523518f-8f94-4cb0-9d16-b696d92d97a2]
2025-05-25 00:02:34.448285 | orchestrator | 00:02:34.447 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-05-25 00:02:34.448354 | orchestrator | 00:02:34.448 STDOUT terraform: Outputs:
2025-05-25 00:02:34.448362 | orchestrator | 00:02:34.448 STDOUT terraform: manager_address =
2025-05-25 00:02:34.448368 | orchestrator | 00:02:34.448 STDOUT terraform: private_key =
2025-05-25 00:02:34.954036 | orchestrator | ok: Runtime: 0:01:35.607175
2025-05-25 00:02:35.006234 |
2025-05-25 00:02:35.006373 | TASK [Fetch manager address]
2025-05-25 00:02:35.432596 | orchestrator | ok
2025-05-25 00:02:35.445041 |
2025-05-25 00:02:35.445182 | TASK [Set manager_host address]
2025-05-25 00:02:35.524399 | orchestrator | ok
2025-05-25 00:02:35.534454 |
2025-05-25 00:02:35.534597 | LOOP [Update ansible collections]
2025-05-25 00:02:36.309840 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-25 00:02:36.310143 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-05-25 00:02:36.310193 | orchestrator | Starting galaxy collection install process
2025-05-25 00:02:36.310228 | orchestrator | Process install dependency map
2025-05-25 00:02:36.310260 | orchestrator | Starting collection install process
2025-05-25 00:02:36.310289 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2025-05-25 00:02:36.310322 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2025-05-25 00:02:36.310368 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-05-25 00:02:36.310442 | orchestrator | ok: Item: commons Runtime: 0:00:00.466526
2025-05-25 00:02:37.061083 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-05-25 00:02:37.061844 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-25 00:02:37.062390 | orchestrator | Starting galaxy collection install process
2025-05-25 00:02:37.063143 | orchestrator | Process install dependency map
2025-05-25 00:02:37.063203 | orchestrator | Starting collection install process
2025-05-25 00:02:37.063233 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2025-05-25 00:02:37.063261 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2025-05-25 00:02:37.063286 | orchestrator | osism.services:999.0.0 was installed successfully
2025-05-25 00:02:37.063328 | orchestrator | ok: Item: services Runtime: 0:00:00.505746
2025-05-25 00:02:37.089426 |
2025-05-25 00:02:37.089622 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-05-25 00:02:47.629941 | orchestrator | ok
2025-05-25 00:02:47.641043 |
2025-05-25 00:02:47.641231 | TASK [Wait a little longer for the manager so that everything is ready]
2025-05-25 00:03:47.688758 | orchestrator | ok
2025-05-25 00:03:47.698185 |
2025-05-25 00:03:47.698295 | TASK [Fetch manager ssh hostkey]
2025-05-25 00:03:49.267995 | orchestrator | Output suppressed because no_log was given
2025-05-25 00:03:49.285305 |
2025-05-25 00:03:49.285503 | TASK [Get ssh keypair from terraform environment]
2025-05-25 00:03:49.828656 | orchestrator | ok: Runtime: 0:00:00.009482
2025-05-25 00:03:49.851180 |
2025-05-25 00:03:49.851347 | TASK [Point out that the following task takes some time and does not give any output]
2025-05-25 00:03:49.894804 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-05-25 00:03:49.906424 |
2025-05-25 00:03:49.906582 | TASK [Run manager part 0]
2025-05-25 00:03:50.805539 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-05-25 00:03:50.847821 | orchestrator |
2025-05-25 00:03:50.847868 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-05-25 00:03:50.847875 | orchestrator |
2025-05-25 00:03:50.847888 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-05-25 00:03:52.794812 | orchestrator | ok: [testbed-manager]
2025-05-25 00:03:52.794866 | orchestrator |
2025-05-25 00:03:52.794889 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-05-25 00:03:52.794899 | orchestrator |
2025-05-25 00:03:52.794909 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 00:03:54.679309 | orchestrator | ok: [testbed-manager]
2025-05-25 00:03:54.679375 | orchestrator |
2025-05-25 00:03:54.679382 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-05-25 00:03:55.385487 | orchestrator | ok: [testbed-manager]
2025-05-25 00:03:55.385633 | orchestrator |
2025-05-25 00:03:55.385648 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-05-25 00:03:55.436915 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:03:55.437006 | orchestrator |
2025-05-25 00:03:55.437034 | orchestrator | TASK [Update package cache] ****************************************************
2025-05-25 00:03:55.481001 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:03:55.481073 | orchestrator |
2025-05-25 00:03:55.481091 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-25 00:03:55.523024 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:03:55.523103 | orchestrator |
2025-05-25 00:03:55.523120 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-25 00:03:55.555897 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:03:55.555941 | orchestrator |
2025-05-25 00:03:55.555946 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-05-25 00:03:55.582766 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:03:55.582812 | orchestrator |
2025-05-25 00:03:55.582822 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-05-25 00:03:55.617732 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:03:55.617779 | orchestrator |
2025-05-25 00:03:55.617787 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-05-25 00:03:55.654587 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:03:55.654666 | orchestrator |
2025-05-25 00:03:55.654684 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-05-25 00:03:56.475915 | orchestrator | changed: [testbed-manager]
2025-05-25 00:03:56.475999 | orchestrator |
2025-05-25 00:03:56.476012 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-05-25 00:06:59.966770 | orchestrator | changed: [testbed-manager]
2025-05-25 00:06:59.966868 | orchestrator |
2025-05-25 00:06:59.966886 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-25 00:08:25.377446 | orchestrator | changed: [testbed-manager]
2025-05-25 00:08:25.377547 | orchestrator |
2025-05-25 00:08:25.377564 | orchestrator | TASK [Install required packages] ***********************************************
2025-05-25 00:08:48.084727 | orchestrator | changed: [testbed-manager]
2025-05-25 00:08:48.084775 | orchestrator |
2025-05-25 00:08:48.084784 | orchestrator | TASK [Remove some python packages] *********************************************
2025-05-25 00:08:56.572237 | orchestrator | changed: [testbed-manager]
2025-05-25 00:08:56.572338 | orchestrator |
2025-05-25 00:08:56.572354 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-05-25 00:08:56.621181 | orchestrator | ok: [testbed-manager]
2025-05-25 00:08:56.621269 | orchestrator |
2025-05-25 00:08:56.621284 | orchestrator | TASK [Get current user] ********************************************************
2025-05-25 00:08:57.391742 | orchestrator | ok: [testbed-manager]
2025-05-25 00:08:57.391826 | orchestrator |
2025-05-25 00:08:57.391844 | orchestrator | TASK [Create venv directory] ***************************************************
2025-05-25 00:08:58.102681 | orchestrator | changed: [testbed-manager]
2025-05-25 00:08:58.102771 | orchestrator |
2025-05-25 00:08:58.102787 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-05-25 00:09:05.266727 | orchestrator | changed: [testbed-manager]
2025-05-25 00:09:05.266788 | orchestrator |
2025-05-25 00:09:05.266812 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-05-25 00:09:11.442085 | orchestrator | changed: [testbed-manager]
2025-05-25 00:09:11.442172 | orchestrator |
2025-05-25 00:09:11.442189 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-05-25 00:09:14.056209 | orchestrator | changed: [testbed-manager]
2025-05-25 00:09:14.056302 | orchestrator |
2025-05-25 00:09:14.056319 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-05-25 00:09:15.810831 | orchestrator | changed: [testbed-manager]
2025-05-25 00:09:15.810917 | orchestrator |
2025-05-25 00:09:15.810933 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-05-25 00:09:16.925919 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-05-25 00:09:16.925977 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-05-25 00:09:16.925988 | orchestrator |
2025-05-25 00:09:16.925998 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-05-25 00:09:16.966234 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-05-25 00:09:16.966328 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-05-25 00:09:16.966351 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-05-25 00:09:16.966480 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-05-25 00:09:20.185053 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-05-25 00:09:20.185170 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-05-25 00:09:20.185186 | orchestrator |
2025-05-25 00:09:20.185199 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-05-25 00:09:20.763156 | orchestrator | changed: [testbed-manager]
2025-05-25 00:09:20.763240 | orchestrator |
2025-05-25 00:09:20.763256 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-05-25 00:12:38.820576 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-05-25 00:12:38.820676 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-05-25 00:12:38.820693 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-05-25 00:12:38.820706 | orchestrator |
2025-05-25 00:12:38.820718 | orchestrator | TASK [Install local collections] ***********************************************
2025-05-25 00:12:41.141801 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-05-25 00:12:41.141834 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-05-25 00:12:41.141839 | orchestrator |
2025-05-25 00:12:41.141844 | orchestrator | PLAY [Create operator user] ****************************************************
2025-05-25 00:12:41.141849 | orchestrator |
2025-05-25 00:12:41.141854 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 00:12:42.560080 | orchestrator | ok: [testbed-manager]
2025-05-25 00:12:42.560117 | orchestrator |
2025-05-25 00:12:42.560126 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-25 00:12:42.609423 | orchestrator | ok: [testbed-manager]
2025-05-25 00:12:42.609466 | orchestrator |
2025-05-25 00:12:42.609474 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-25 00:12:42.677110 | orchestrator | ok: [testbed-manager]
2025-05-25 00:12:42.677152 | orchestrator |
2025-05-25 00:12:42.677160 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-25 00:12:43.459127 | orchestrator | changed: [testbed-manager]
2025-05-25 00:12:43.459168 | orchestrator |
2025-05-25 00:12:43.459176 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-25 00:12:44.169436 | orchestrator | changed: [testbed-manager]
2025-05-25 00:12:44.169527 | orchestrator |
2025-05-25 00:12:44.169543 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-25 00:12:45.551425 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-05-25 00:12:45.551512 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-05-25 00:12:45.551528 | orchestrator |
2025-05-25 00:12:45.551555 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-25 00:12:46.905062 | orchestrator | changed: [testbed-manager]
2025-05-25 00:12:46.905173 | orchestrator |
2025-05-25 00:12:46.905192 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-25 00:12:48.623221 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-05-25 00:12:48.623308 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-05-25 00:12:48.623323 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-05-25 00:12:48.623334 | orchestrator |
2025-05-25 00:12:48.623346 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-25 00:12:49.186705 | orchestrator | changed: [testbed-manager]
2025-05-25 00:12:49.186799 | orchestrator |
2025-05-25 00:12:49.186816 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-25 00:12:49.255634 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:12:49.255710 | orchestrator |
2025-05-25 00:12:49.255724 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-25 00:12:50.106105 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:12:50.106195 | orchestrator | changed: [testbed-manager]
2025-05-25 00:12:50.106211 | orchestrator |
2025-05-25 00:12:50.106224 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-25 00:12:50.134400 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:12:50.134482 | orchestrator |
2025-05-25 00:12:50.134496 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-25 00:12:50.173821 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:12:50.173894 | orchestrator |
2025-05-25 00:12:50.173908 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-25 00:12:50.214726 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:12:50.214830 | orchestrator |
2025-05-25 00:12:50.214852 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-25 00:12:50.266425 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:12:50.266530 | orchestrator |
2025-05-25 00:12:50.266555 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-25 00:12:50.975050 | orchestrator | ok: [testbed-manager]
2025-05-25 00:12:50.975143 | orchestrator |
2025-05-25 00:12:50.975159 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-05-25 00:12:50.975172 | orchestrator |
2025-05-25 00:12:50.975186 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 00:12:52.392077 | orchestrator | ok: [testbed-manager]
2025-05-25 00:12:52.392777 | orchestrator |
2025-05-25 00:12:52.392799 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-05-25 00:12:53.368901 | orchestrator | changed: [testbed-manager]
2025-05-25 00:12:53.369021 | orchestrator |
2025-05-25 00:12:53.369038 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:12:53.369052 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-05-25 00:12:53.369064 | orchestrator |
2025-05-25 00:12:53.780297 | orchestrator | ok: Runtime: 0:09:03.248980
2025-05-25 00:12:53.800989 |
2025-05-25 00:12:53.801297 | TASK [Point out that logging in on the manager is now possible]
2025-05-25 00:12:53.851688 | orchestrator | ok: It is now possible to log in to the manager with 'make login'.
2025-05-25 00:12:53.861789 | 2025-05-25 00:12:53.861916 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-25 00:12:53.895418 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-05-25 00:12:53.903921 | 2025-05-25 00:12:53.904101 | TASK [Run manager part 1 + 2] 2025-05-25 00:12:54.721037 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-25 00:12:54.774334 | orchestrator | 2025-05-25 00:12:54.774381 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-25 00:12:54.774388 | orchestrator | 2025-05-25 00:12:54.774400 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-25 00:12:58.798120 | orchestrator | ok: [testbed-manager] 2025-05-25 00:12:58.798171 | orchestrator | 2025-05-25 00:12:58.798195 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-25 00:12:58.835275 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:12:58.835325 | orchestrator | 2025-05-25 00:12:58.835336 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-25 00:12:58.880568 | orchestrator | ok: [testbed-manager] 2025-05-25 00:12:58.880614 | orchestrator | 2025-05-25 00:12:58.880625 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-25 00:12:58.920863 | orchestrator | ok: [testbed-manager] 2025-05-25 00:12:58.920912 | orchestrator | 2025-05-25 00:12:58.920923 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-25 00:12:58.985464 | orchestrator | ok: [testbed-manager] 2025-05-25 00:12:58.985519 | orchestrator | 2025-05-25 00:12:58.985531 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-25 00:12:59.047684 | orchestrator | ok: [testbed-manager] 2025-05-25 00:12:59.047736 | orchestrator | 2025-05-25 00:12:59.047746 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-25 00:12:59.091923 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-25 00:12:59.091984 | orchestrator | 2025-05-25 00:12:59.091991 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-25 00:12:59.923382 | orchestrator | ok: [testbed-manager] 2025-05-25 00:12:59.923444 | orchestrator | 2025-05-25 00:12:59.923457 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-25 00:12:59.964808 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:12:59.964855 | orchestrator | 2025-05-25 00:12:59.964863 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-25 00:13:01.320659 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:01.320716 | orchestrator | 2025-05-25 00:13:01.320728 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-25 00:13:01.875493 | orchestrator | ok: [testbed-manager] 2025-05-25 00:13:01.875551 | orchestrator | 2025-05-25 00:13:01.875561 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-25 00:13:03.029380 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:03.029461 | orchestrator | 2025-05-25 00:13:03.029487 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-25 00:13:16.338921 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:16.339038 | orchestrator | 
2025-05-25 00:13:16.339055 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-25 00:13:17.022324 | orchestrator | ok: [testbed-manager] 2025-05-25 00:13:17.022420 | orchestrator | 2025-05-25 00:13:17.022440 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-25 00:13:17.069815 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:13:17.069896 | orchestrator | 2025-05-25 00:13:17.069916 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-25 00:13:18.026404 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:18.026491 | orchestrator | 2025-05-25 00:13:18.026508 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-25 00:13:18.977945 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:18.978096 | orchestrator | 2025-05-25 00:13:18.978114 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-25 00:13:19.524263 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:19.524302 | orchestrator | 2025-05-25 00:13:19.524311 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-25 00:13:19.563834 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-25 00:13:19.563931 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-25 00:13:19.563945 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-25 00:13:19.563980 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-25 00:13:21.460288 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:21.460335 | orchestrator | 2025-05-25 00:13:21.460344 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-25 00:13:30.409407 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-25 00:13:30.409512 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-25 00:13:30.409531 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-25 00:13:30.409544 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-25 00:13:30.409565 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-25 00:13:30.409576 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-25 00:13:30.409587 | orchestrator | 2025-05-25 00:13:30.409599 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-25 00:13:31.455553 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:31.455642 | orchestrator | 2025-05-25 00:13:31.455657 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-25 00:13:31.499747 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:13:31.499820 | orchestrator | 2025-05-25 00:13:31.499834 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-25 00:13:34.609110 | orchestrator | changed: [testbed-manager] 2025-05-25 00:13:34.609209 | orchestrator | 2025-05-25 00:13:34.609226 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-25 00:13:34.648276 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:13:34.648363 | orchestrator | 2025-05-25 00:13:34.648379 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-25 00:15:12.863608 | orchestrator | changed: [testbed-manager] 2025-05-25 
00:15:12.863705 | orchestrator | 2025-05-25 00:15:12.863724 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-25 00:15:13.992540 | orchestrator | ok: [testbed-manager] 2025-05-25 00:15:13.992626 | orchestrator | 2025-05-25 00:15:13.992643 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:15:13.992658 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-25 00:15:13.992670 | orchestrator | 2025-05-25 00:15:14.147268 | orchestrator | ok: Runtime: 0:02:19.880556 2025-05-25 00:15:14.158076 | 2025-05-25 00:15:14.158201 | TASK [Reboot manager] 2025-05-25 00:15:15.692207 | orchestrator | ok: Runtime: 0:00:00.958520 2025-05-25 00:15:15.709407 | 2025-05-25 00:15:15.709566 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-25 00:15:30.156872 | orchestrator | ok 2025-05-25 00:15:30.168954 | 2025-05-25 00:15:30.169186 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-25 00:16:30.224417 | orchestrator | ok 2025-05-25 00:16:30.233717 | 2025-05-25 00:16:30.233842 | TASK [Deploy manager + bootstrap nodes] 2025-05-25 00:16:32.618147 | orchestrator | 2025-05-25 00:16:32.618392 | orchestrator | # DEPLOY MANAGER 2025-05-25 00:16:32.618417 | orchestrator | 2025-05-25 00:16:32.618432 | orchestrator | + set -e 2025-05-25 00:16:32.618445 | orchestrator | + echo 2025-05-25 00:16:32.618459 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-25 00:16:32.618476 | orchestrator | + echo 2025-05-25 00:16:32.618525 | orchestrator | + cat /opt/manager-vars.sh 2025-05-25 00:16:32.621164 | orchestrator | export NUMBER_OF_NODES=6 2025-05-25 00:16:32.621192 | orchestrator | 2025-05-25 00:16:32.621204 | orchestrator | export CEPH_VERSION=reef 2025-05-25 00:16:32.621217 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-25 00:16:32.621229 | orchestrator 
| export MANAGER_VERSION=8.1.0
2025-05-25 00:16:32.621252 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-05-25 00:16:32.621262 | orchestrator |
2025-05-25 00:16:32.621281 | orchestrator | export ARA=false
2025-05-25 00:16:32.621293 | orchestrator | export TEMPEST=false
2025-05-25 00:16:32.621310 | orchestrator | export IS_ZUUL=true
2025-05-25 00:16:32.621321 | orchestrator |
2025-05-25 00:16:32.621339 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.93
2025-05-25 00:16:32.621351 | orchestrator | export EXTERNAL_API=false
2025-05-25 00:16:32.621362 | orchestrator |
2025-05-25 00:16:32.621383 | orchestrator | export IMAGE_USER=ubuntu
2025-05-25 00:16:32.621394 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-05-25 00:16:32.621405 | orchestrator |
2025-05-25 00:16:32.621418 | orchestrator | export CEPH_STACK=ceph-ansible
2025-05-25 00:16:32.621435 | orchestrator |
2025-05-25 00:16:32.621446 | orchestrator | + echo
2025-05-25 00:16:32.621457 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-25 00:16:32.622195 | orchestrator | ++ export INTERACTIVE=false
2025-05-25 00:16:32.622217 | orchestrator | ++ INTERACTIVE=false
2025-05-25 00:16:32.622229 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-25 00:16:32.622241 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-25 00:16:32.622476 | orchestrator | + source /opt/manager-vars.sh
2025-05-25 00:16:32.622492 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-25 00:16:32.622505 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-25 00:16:32.622516 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-25 00:16:32.622527 | orchestrator | ++ CEPH_VERSION=reef
2025-05-25 00:16:32.622636 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-25 00:16:32.622650 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-25 00:16:32.622661 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-25 00:16:32.622672 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-25 00:16:32.622683 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-25 00:16:32.622693 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-25 00:16:32.622704 | orchestrator | ++ export ARA=false
2025-05-25 00:16:32.622715 | orchestrator | ++ ARA=false
2025-05-25 00:16:32.622738 | orchestrator | ++ export TEMPEST=false
2025-05-25 00:16:32.622749 | orchestrator | ++ TEMPEST=false
2025-05-25 00:16:32.622760 | orchestrator | ++ export IS_ZUUL=true
2025-05-25 00:16:32.622771 | orchestrator | ++ IS_ZUUL=true
2025-05-25 00:16:32.622785 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.93
2025-05-25 00:16:32.622796 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.93
2025-05-25 00:16:32.622888 | orchestrator | ++ export EXTERNAL_API=false
2025-05-25 00:16:32.622903 | orchestrator | ++ EXTERNAL_API=false
2025-05-25 00:16:32.622913 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-25 00:16:32.622924 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-25 00:16:32.622938 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-25 00:16:32.622950 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-25 00:16:32.622961 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-25 00:16:32.622972 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-25 00:16:32.623109 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-05-25 00:16:32.672639 | orchestrator | + docker version
2025-05-25 00:16:32.909321 | orchestrator | Client: Docker Engine - Community
2025-05-25 00:16:32.909429 | orchestrator | Version: 26.1.4
2025-05-25 00:16:32.909448 | orchestrator | API version: 1.45
2025-05-25 00:16:32.909460 | orchestrator | Go version: go1.21.11
2025-05-25 00:16:32.909471 | orchestrator | Git commit: 5650f9b
2025-05-25 00:16:32.909483 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-25 00:16:32.909495 | orchestrator | OS/Arch: linux/amd64
2025-05-25 00:16:32.909507 | orchestrator | Context: default
2025-05-25 00:16:32.909518 | orchestrator |
2025-05-25 00:16:32.909529 | orchestrator | Server: Docker Engine - Community
2025-05-25 00:16:32.909541 | orchestrator | Engine:
2025-05-25 00:16:32.909552 | orchestrator | Version: 26.1.4
2025-05-25 00:16:32.909563 | orchestrator | API version: 1.45 (minimum version 1.24)
2025-05-25 00:16:32.909573 | orchestrator | Go version: go1.21.11
2025-05-25 00:16:32.909584 | orchestrator | Git commit: de5c9cf
2025-05-25 00:16:32.909628 | orchestrator | Built: Wed Jun 5 11:28:57 2024
2025-05-25 00:16:32.909639 | orchestrator | OS/Arch: linux/amd64
2025-05-25 00:16:32.909650 | orchestrator | Experimental: false
2025-05-25 00:16:32.909661 | orchestrator | containerd:
2025-05-25 00:16:32.909672 | orchestrator | Version: 1.7.27
2025-05-25 00:16:32.909683 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-05-25 00:16:32.909694 | orchestrator | runc:
2025-05-25 00:16:32.909705 | orchestrator | Version: 1.2.5
2025-05-25 00:16:32.909716 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-05-25 00:16:32.909726 | orchestrator | docker-init:
2025-05-25 00:16:32.909737 | orchestrator | Version: 0.19.0
2025-05-25 00:16:32.909748 | orchestrator | GitCommit: de40ad0
2025-05-25 00:16:32.912237 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-05-25 00:16:32.921780 | orchestrator | + set -e
2025-05-25 00:16:32.921892 | orchestrator | + source /opt/manager-vars.sh
2025-05-25 00:16:32.921909 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-25 00:16:32.921921 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-25 00:16:32.921932 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-25 00:16:32.921942 | orchestrator | ++ CEPH_VERSION=reef
2025-05-25 00:16:32.921953 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-25 00:16:32.921967 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-25 00:16:32.921978 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-25 00:16:32.921989 | orchestrator | ++ MANAGER_VERSION=8.1.0
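The xtrace above shows every deployment knob being sourced from /opt/manager-vars.sh and /opt/configuration/scripts/include.sh before any deploy script runs. A minimal sketch of what such a vars file looks like, using the variable names and values visible in this job's trace (the file layout itself is an assumption; only the names/values are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of an /opt/manager-vars.sh-style environment file, reconstructed
# from the xtrace above. Values are the ones used by this particular job.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CONFIGURATION_VERSION=main
export MANAGER_VERSION=8.1.0
export OPENSTACK_VERSION=2024.2
export ARA=false
export TEMPEST=false
export IS_ZUUL=true
export EXTERNAL_API=false
export IMAGE_USER=ubuntu
export IMAGE_NODE_USER=ubuntu
export CEPH_STACK=ceph-ansible
```

Because the file only exports variables, sourcing it is idempotent, which is why the log shows it being sourced again inside 000-manager.sh without harm.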
2025-05-25 00:16:32.922000 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-25 00:16:32.922010 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-25 00:16:32.922067 | orchestrator | ++ export ARA=false
2025-05-25 00:16:32.922078 | orchestrator | ++ ARA=false
2025-05-25 00:16:32.922089 | orchestrator | ++ export TEMPEST=false
2025-05-25 00:16:32.922113 | orchestrator | ++ TEMPEST=false
2025-05-25 00:16:32.922124 | orchestrator | ++ export IS_ZUUL=true
2025-05-25 00:16:32.922135 | orchestrator | ++ IS_ZUUL=true
2025-05-25 00:16:32.922146 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.93
2025-05-25 00:16:32.922157 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.93
2025-05-25 00:16:32.922168 | orchestrator | ++ export EXTERNAL_API=false
2025-05-25 00:16:32.922183 | orchestrator | ++ EXTERNAL_API=false
2025-05-25 00:16:32.922194 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-25 00:16:32.922204 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-25 00:16:32.922215 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-25 00:16:32.922226 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-25 00:16:32.922236 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-25 00:16:32.922247 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-25 00:16:32.922261 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-25 00:16:32.922272 | orchestrator | ++ export INTERACTIVE=false
2025-05-25 00:16:32.922283 | orchestrator | ++ INTERACTIVE=false
2025-05-25 00:16:32.922293 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-25 00:16:32.922305 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-25 00:16:32.922500 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-25 00:16:32.922518 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0
2025-05-25 00:16:32.926888 | orchestrator | + set -e
2025-05-25 00:16:32.926938 | orchestrator | + VERSION=8.1.0
2025-05-25 00:16:32.926958 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-05-25 00:16:32.934212 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-25 00:16:32.934260 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-25 00:16:32.938920 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-05-25 00:16:32.943443 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-05-25 00:16:32.951256 | orchestrator | /opt/configuration ~
2025-05-25 00:16:32.951309 | orchestrator | + set -e
2025-05-25 00:16:32.951321 | orchestrator | + pushd /opt/configuration
2025-05-25 00:16:32.951332 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-25 00:16:32.952332 | orchestrator | + source /opt/venv/bin/activate
2025-05-25 00:16:32.953810 | orchestrator | ++ deactivate nondestructive
2025-05-25 00:16:32.953828 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:32.953873 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:32.953885 | orchestrator | ++ hash -r
2025-05-25 00:16:32.953896 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:32.953907 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-25 00:16:32.953917 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-25 00:16:32.953928 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-25 00:16:32.953939 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-25 00:16:32.953971 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-25 00:16:32.953983 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-25 00:16:32.953994 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-25 00:16:32.954006 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 00:16:32.954061 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 00:16:32.954076 | orchestrator | ++ export PATH
2025-05-25 00:16:32.954086 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:32.954102 | orchestrator | ++ '[' -z '' ']'
2025-05-25 00:16:32.954113 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-25 00:16:32.954124 | orchestrator | ++ PS1='(venv) '
2025-05-25 00:16:32.954134 | orchestrator | ++ export PS1
2025-05-25 00:16:32.954145 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-25 00:16:32.954156 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-25 00:16:32.954166 | orchestrator | ++ hash -r
2025-05-25 00:16:32.954189 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-05-25 00:16:34.004546 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-05-25 00:16:34.005209 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-05-25 00:16:34.006579 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-05-25 00:16:34.007816 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-05-25 00:16:34.008935 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-05-25 00:16:34.019004 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-05-25 00:16:34.020264 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-05-25 00:16:34.021311 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-05-25 00:16:34.022555 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-05-25 00:16:34.053120 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-05-25 00:16:34.054376 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-05-25 00:16:34.056047 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-05-25 00:16:34.057663 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-05-25 00:16:34.061541 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-05-25 00:16:34.265436 | orchestrator | ++ which gilt
2025-05-25 00:16:34.269149 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-05-25 00:16:34.269236 | orchestrator | + /opt/venv/bin/gilt overlay
2025-05-25 00:16:34.473575 | orchestrator | osism.cfg-generics:
2025-05-25 00:16:34.473693 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-05-25 00:16:35.972305 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-05-25 00:16:35.972414 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-05-25 00:16:35.972728 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-05-25 00:16:35.972757 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-05-25 00:16:36.877108 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-05-25 00:16:36.888088 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-05-25 00:16:37.353472 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-05-25 00:16:37.413149 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-25 00:16:37.413228 | orchestrator | + deactivate
2025-05-25 00:16:37.413242 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-25 00:16:37.413256 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 00:16:37.413266 | orchestrator | + export PATH
2025-05-25 00:16:37.413278 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-25 00:16:37.413289 | orchestrator | + '[' -n '' ']'
2025-05-25 00:16:37.413299 | orchestrator | + hash -r
2025-05-25 00:16:37.413310 | orchestrator | + '[' -n '' ']'
2025-05-25 00:16:37.413321 | orchestrator | + unset VIRTUAL_ENV
2025-05-25 00:16:37.413331 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-25 00:16:37.413342 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-25 00:16:37.413353 | orchestrator | + unset -f deactivate
2025-05-25 00:16:37.413364 | orchestrator | + popd
2025-05-25 00:16:37.413375 | orchestrator | ~
2025-05-25 00:16:37.415127 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-05-25 00:16:37.415153 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-05-25 00:16:37.415992 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-25 00:16:37.477738 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-25 00:16:37.477813 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-05-25 00:16:37.477824 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-05-25 00:16:37.518796 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-25 00:16:37.518944 | orchestrator | + source /opt/venv/bin/activate
2025-05-25 00:16:37.518973 | orchestrator | ++ deactivate nondestructive
2025-05-25 00:16:37.518993 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:37.519010 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:37.519029 | orchestrator | ++ hash -r
2025-05-25 00:16:37.519048 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:37.519349 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-25 00:16:37.519373 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-25 00:16:37.519391 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-25 00:16:37.519409 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-25 00:16:37.519425 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-25 00:16:37.519441 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-25 00:16:37.519454 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-25 00:16:37.519468 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 00:16:37.519483 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-25 00:16:37.519497 | orchestrator | ++ export PATH
2025-05-25 00:16:37.519510 | orchestrator | ++ '[' -n '' ']'
2025-05-25 00:16:37.519523 | orchestrator | ++ '[' -z '' ']'
2025-05-25 00:16:37.519537 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-25 00:16:37.519551 | orchestrator | ++ PS1='(venv) '
2025-05-25 00:16:37.519565 | orchestrator | ++ export PS1
2025-05-25 00:16:37.519578 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-25 00:16:37.519592 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-25 00:16:37.519605 | orchestrator | ++ hash -r
2025-05-25 00:16:37.519620 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-05-25 00:16:38.660594 | orchestrator |
2025-05-25 00:16:38.660723 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-05-25 00:16:38.660741 | orchestrator |
2025-05-25 00:16:38.660754 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-25 00:16:39.218887 | orchestrator | ok: [testbed-manager]
2025-05-25 00:16:39.218988 | orchestrator |
2025-05-25 00:16:39.219003 | orchestrator | TASK [Copy fact files] *********************************************************
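The `++ semver 8.1.0 7.0.0` call in the trace above (a symlink to contrib/semver2.sh) prints 1 because 8.1.0 is greater than 7.0.0, and the following `[[ 1 -ge 0 ]]` test enables `enable_osism_kubernetes` for manager versions at or above 7.0.0. A rough stand-in for that comparison using GNU `sort -V`; the function name `vercmp` is hypothetical, and the real semver2.sh additionally handles pre-release tags and build metadata:

```shell
# Hypothetical stand-in for the semver comparison seen in the trace.
# Prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
vercmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        # $2 sorts first, so $1 is the larger version
        echo 1
    else
        echo -1
    fi
}

vercmp 8.1.0 7.0.0   # prints 1, so a "-ge 0" gate passes
```

Naive string or numeric comparison fails here (e.g. "10.0.0" < "9.0.0" lexically), which is why a version-aware sort or a dedicated semver script is used.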
2025-05-25 00:16:40.196048 | orchestrator | changed: [testbed-manager]
2025-05-25 00:16:40.196151 | orchestrator |
2025-05-25 00:16:40.196167 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-05-25 00:16:40.196180 | orchestrator |
2025-05-25 00:16:40.196192 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 00:16:42.531237 | orchestrator | ok: [testbed-manager]
2025-05-25 00:16:42.531335 | orchestrator |
2025-05-25 00:16:42.531347 | orchestrator | TASK [Pull images] *************************************************************
2025-05-25 00:16:47.405775 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-05-25 00:16:47.405932 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2)
2025-05-25 00:16:47.405948 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-05-25 00:16:47.405958 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-05-25 00:16:47.405981 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-05-25 00:16:47.406063 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine)
2025-05-25 00:16:47.406078 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-05-25 00:16:47.406089 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-05-25 00:16:47.406099 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-05-25 00:16:47.406108 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine)
2025-05-25 00:16:47.406117 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1)
2025-05-25 00:16:47.406126 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2)
2025-05-25 00:16:47.406135 | orchestrator |
2025-05-25 00:16:47.406144 | orchestrator | TASK [Check status] ************************************************************
2025-05-25 00:18:03.363387 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-25 00:18:03.363507 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-25 00:18:03.363524 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-05-25 00:18:03.363536 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-05-25 00:18:03.363560 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j101119498339.1588', 'results_file': '/home/dragon/.ansible_async/j101119498339.1588', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363581 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j929112414211.1613', 'results_file': '/home/dragon/.ansible_async/j929112414211.1613', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363597 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-25 00:18:03.363608 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-25 00:18:03.363619 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j973137528723.1638', 'results_file': '/home/dragon/.ansible_async/j973137528723.1638', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363630 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j760309960102.1670', 'results_file': '/home/dragon/.ansible_async/j760309960102.1670', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363642 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j786546450438.1701', 'results_file': '/home/dragon/.ansible_async/j786546450438.1701', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363654 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j872510115249.1733', 'results_file': '/home/dragon/.ansible_async/j872510115249.1733', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363665 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
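The "Pull images" / "Check status" pair above is Ansible's async pattern: each pull is fired off without waiting, and a second task then polls the recorded job ids until they finish. The `FAILED - RETRYING` lines are normal polling output, not real failures: each one is a poll that found a job still running. The same fire-then-poll shape in plain shell, as an illustrative sketch (`pull_image` is a stand-in for the real `docker pull`):

```shell
# Illustrative fire-and-poll sketch of the async pattern in the play above.
# pull_image stands in for "docker pull"; here it just sleeps briefly.
pull_image() { sleep 1; }

pids=()
for image in ara-server mariadb netbox; do
    pull_image "$image" &   # launch in the background, like async / poll: 0
    pids+=("$!")
done

# Wait for each background job, like async_status polling each job id.
for pid in "${pids[@]}"; do
    wait "$pid"             # the real poller retries with a budget instead
done

result="all pulls finished"
echo "$result"
```

Firing all pulls first and polling afterwards lets the twelve image downloads overlap instead of running serially, which is why the whole batch completes within the single "Check status" task window.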
2025-05-25 00:18:03.363707 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j152932104346.1766', 'results_file': '/home/dragon/.ansible_async/j152932104346.1766', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363723 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j31393272621.1798', 'results_file': '/home/dragon/.ansible_async/j31393272621.1798', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363734 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j314494857393.1831', 'results_file': '/home/dragon/.ansible_async/j314494857393.1831', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363745 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j944819635822.1865', 'results_file': '/home/dragon/.ansible_async/j944819635822.1865', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363756 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j853375328019.1904', 'results_file': '/home/dragon/.ansible_async/j853375328019.1904', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363767 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j653419610673.1938', 'results_file': '/home/dragon/.ansible_async/j653419610673.1938', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-05-25 00:18:03.363779 | orchestrator |
2025-05-25 00:18:03.363854 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-05-25 00:18:03.423535 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:03.423629 | orchestrator |
2025-05-25 00:18:03.423645 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-05-25 00:18:03.901056 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:03.901164 | orchestrator |
2025-05-25 00:18:03.901179 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-05-25 00:18:04.242584 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:04.242717 | orchestrator |
2025-05-25 00:18:04.242734 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-25 00:18:04.577580 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:04.577705 | orchestrator |
2025-05-25 00:18:04.577722 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-05-25 00:18:04.633557 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:18:04.633662 | orchestrator |
2025-05-25 00:18:04.633682 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-05-25 00:18:04.985716 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:04.985867 | orchestrator |
2025-05-25 00:18:04.985886 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-05-25 00:18:05.108495 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:18:05.108586 | orchestrator |
2025-05-25 00:18:05.108600 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-05-25 00:18:05.108612 | orchestrator |
2025-05-25 00:18:05.108623 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 00:18:06.954462 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:06.954597 | orchestrator |
2025-05-25 00:18:06.954624 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-05-25 00:18:07.057534 | orchestrator | included: osism.services.traefik for testbed-manager
2025-05-25 00:18:07.057632 | orchestrator |
2025-05-25 00:18:07.057646 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-05-25 00:18:07.130112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-05-25 00:18:07.130267 | orchestrator |
2025-05-25 00:18:07.130285 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-05-25 00:18:08.230334 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-05-25 00:18:08.230446 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-05-25 00:18:08.230462 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-05-25 00:18:08.230474 | orchestrator |
2025-05-25 00:18:08.230486 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-05-25 00:18:10.004435 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-05-25 00:18:10.004547 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-05-25 00:18:10.004562 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-05-25 00:18:10.004574 | orchestrator |
2025-05-25 00:18:10.004586 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-05-25 00:18:10.644499 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:18:10.644600 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:10.644614 | orchestrator |
2025-05-25 00:18:10.644650 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-05-25 00:18:11.269469 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:18:11.269572 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:11.269587 | orchestrator |
2025-05-25 00:18:11.269600 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-05-25 00:18:11.328641 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:18:11.328741 | orchestrator |
2025-05-25 00:18:11.328756 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-05-25 00:18:11.679316 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:11.679416 | orchestrator |
2025-05-25 00:18:11.679431 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-05-25 00:18:11.742676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-05-25 00:18:11.742765 | orchestrator |
2025-05-25 00:18:11.742824 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-05-25 00:18:12.762846 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:12.762974 | orchestrator |
2025-05-25 00:18:12.762999 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-05-25 00:18:13.558360 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:13.558501 | orchestrator |
2025-05-25 00:18:13.558520 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-05-25 00:18:16.824502 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:16.824582 | orchestrator |
2025-05-25 00:18:16.824590 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-05-25 00:18:16.953262 | orchestrator | included: osism.services.netbox for testbed-manager
2025-05-25 00:18:16.953365 | orchestrator |
2025-05-25 00:18:16.953381 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-05-25 00:18:17.023318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-05-25 00:18:17.023405 | orchestrator |
2025-05-25 00:18:17.023416 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-05-25 00:18:19.551453 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:19.551561 | orchestrator |
2025-05-25 00:18:19.551577 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-25 00:18:19.662322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-05-25 00:18:19.662414 | orchestrator |
2025-05-25 00:18:19.662427 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-05-25 00:18:20.806257 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-05-25 00:18:20.806349 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-05-25 00:18:20.806363 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-05-25 00:18:20.806403 | orchestrator |
2025-05-25 00:18:20.806416 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-05-25 00:18:20.867699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-05-25 00:18:20.867849 | orchestrator |
2025-05-25 00:18:20.867873 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-05-25 00:18:21.503945 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-05-25 00:18:21.504051 | orchestrator |
2025-05-25 00:18:21.504066 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-05-25 00:18:22.156414 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:22.156522 | orchestrator |
2025-05-25 00:18:22.156539 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-25 00:18:22.817388 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:18:22.817497 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:22.817513 | orchestrator |
2025-05-25 00:18:22.817535 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-05-25 00:18:23.202675 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:23.202755 | orchestrator |
2025-05-25 00:18:23.202767 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-05-25 00:18:23.562269 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:23.562361 | orchestrator |
2025-05-25 00:18:23.562375 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-05-25 00:18:23.612929 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:18:23.613014 | orchestrator |
2025-05-25 00:18:23.613029 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-05-25 00:18:24.252192 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:24.252300 | orchestrator |
2025-05-25 00:18:24.252316 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-25 00:18:24.342466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-05-25 00:18:24.342557 | orchestrator |
2025-05-25 00:18:24.342570 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-05-25 00:18:25.130295 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-05-25 00:18:25.130402 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-05-25 00:18:25.130418 | orchestrator |
2025-05-25 00:18:25.130431 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-05-25 00:18:25.802124 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-05-25 00:18:25.802227 | orchestrator |
2025-05-25 00:18:25.802244 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-05-25 00:18:26.446641 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:26.446744 | orchestrator |
2025-05-25 00:18:26.446761 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-05-25 00:18:26.498643 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:18:26.498740 | orchestrator |
2025-05-25 00:18:26.498753 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-05-25 00:18:27.144312 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:27.144425 | orchestrator |
2025-05-25 00:18:27.144442 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-25 00:18:28.908927 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:18:28.909053 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:18:28.909075 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:18:28.909087 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:28.909099 | orchestrator |
2025-05-25 00:18:28.909111 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-05-25 00:18:34.916546 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-05-25 00:18:34.916666 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-05-25 00:18:34.916684 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-05-25 00:18:34.916696 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-05-25 00:18:34.916736 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-05-25 00:18:34.916748 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-05-25 00:18:34.916759 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-05-25 00:18:34.916858 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-05-25 00:18:34.916874 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-05-25 00:18:34.916886 | orchestrator | changed: [testbed-manager] => (item=users)
2025-05-25 00:18:34.916896 | orchestrator |
2025-05-25 00:18:34.916909 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-05-25 00:18:35.556173 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-05-25 00:18:35.556281 | orchestrator |
2025-05-25 00:18:35.556298 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-05-25 00:18:35.642256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-05-25 00:18:35.642343 | orchestrator |
2025-05-25 00:18:35.642357 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-05-25 00:18:36.367417 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:36.367522 | orchestrator |
2025-05-25 00:18:36.367536 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-05-25 00:18:36.996856 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:36.996956 | orchestrator |
2025-05-25 00:18:36.996970 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-05-25 00:18:37.713222 | orchestrator | changed: [testbed-manager]
2025-05-25 00:18:37.713331 | orchestrator |
2025-05-25 00:18:37.713347 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-05-25 00:18:40.079875 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:40.080017 | orchestrator |
2025-05-25 00:18:40.080037 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-05-25 00:18:41.015644 | orchestrator | ok: [testbed-manager]
2025-05-25 00:18:41.015756 | orchestrator |
2025-05-25 00:18:41.015826 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-05-25 00:19:03.176939 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-05-25 00:19:03.177073 | orchestrator | ok: [testbed-manager]
2025-05-25 00:19:03.177091 | orchestrator |
2025-05-25 00:19:03.177103 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-05-25 00:19:03.229263 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:03.229365 | orchestrator |
2025-05-25 00:19:03.229379 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-05-25 00:19:03.229390 | orchestrator |
2025-05-25 00:19:03.229401 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-05-25 00:19:03.268643 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:03.268674 | orchestrator |
2025-05-25 00:19:03.268685 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-25 00:19:03.325191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-05-25 00:19:03.325265 | orchestrator |
2025-05-25 00:19:03.325279 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-05-25 00:19:04.131355 | orchestrator | ok: [testbed-manager]
2025-05-25 00:19:04.131461 | orchestrator |
2025-05-25 00:19:04.131477 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-05-25 00:19:04.205612 | orchestrator | ok: [testbed-manager]
2025-05-25 00:19:04.205713 | orchestrator |
2025-05-25 00:19:04.205728 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-05-25 00:19:04.265400 | orchestrator | ok: [testbed-manager] => {
2025-05-25 00:19:04.265487 | orchestrator | "msg": "The major version of the running postgres container is 16"
2025-05-25 00:19:04.265501 | orchestrator | }
2025-05-25 00:19:04.265512 | orchestrator |
2025-05-25 00:19:04.265523 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-05-25 00:19:04.906659 | orchestrator | ok: [testbed-manager]
2025-05-25 00:19:04.906868 | orchestrator |
2025-05-25 00:19:04.906890 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-05-25 00:19:05.774482 | orchestrator | ok: [testbed-manager]
2025-05-25 00:19:05.774595 | orchestrator |
2025-05-25 00:19:05.774611 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-05-25 00:19:05.849898 | orchestrator | ok: [testbed-manager]
2025-05-25 00:19:05.849988 | orchestrator |
2025-05-25 00:19:05.850001 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-05-25 00:19:05.906703 | orchestrator | ok: [testbed-manager] => {
2025-05-25 00:19:05.906855 | orchestrator | "msg": "The major version of the postgres image is 16"
2025-05-25 00:19:05.906876 | orchestrator | }
2025-05-25 00:19:05.906895 | orchestrator |
2025-05-25 00:19:05.906908 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-05-25 00:19:05.970804 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:05.970915 | orchestrator |
2025-05-25 00:19:05.970943 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-25 00:19:06.037639 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:06.037739 | orchestrator |
2025-05-25 00:19:06.037789 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-05-25 00:19:06.096389 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:06.096497 | orchestrator |
2025-05-25 00:19:06.096514 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-05-25 00:19:06.145385 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:06.145473 | orchestrator |
2025-05-25 00:19:06.145495 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-05-25 00:19:06.194248 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:06.194326 | orchestrator |
2025-05-25 00:19:06.194338 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-05-25 00:19:06.292687 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:19:06.292867 | orchestrator |
2025-05-25 00:19:06.292889 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-25 00:19:07.469227 | orchestrator | changed: [testbed-manager]
2025-05-25 00:19:07.469332 | orchestrator |
2025-05-25 00:19:07.469348 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-05-25 00:19:07.534674 | orchestrator | ok: [testbed-manager]
2025-05-25 00:19:07.534809 | orchestrator |
2025-05-25 00:19:07.534824 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-05-25 00:20:07.592389 | orchestrator | Pausing for 60 seconds
2025-05-25 00:20:07.592509 | orchestrator | changed: [testbed-manager]
2025-05-25 00:20:07.592525 | orchestrator |
2025-05-25 00:20:07.592537 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-05-25 00:20:07.637635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-05-25 00:20:07.637665 | orchestrator |
2025-05-25 00:20:07.637676 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-05-25 00:24:19.189677 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-05-25 00:24:19.189780 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-25 00:24:19.189796 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-25 00:24:19.189807 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-25 00:24:19.189818 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-25 00:24:19.189829 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-25 00:24:19.189839 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-25 00:24:19.189850 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-25 00:24:19.189861 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-25 00:24:19.189895 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-25 00:24:19.189906 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-25 00:24:19.189970 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-25 00:24:19.189982 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-25 00:24:19.189993 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-25 00:24:19.190003 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-25 00:24:19.190083 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-25 00:24:19.190099 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-25 00:24:19.190110 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-25 00:24:19.190121 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-25 00:24:19.190132 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-05-25 00:24:19.190143 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
2025-05-25 00:24:19.190153 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left).
2025-05-25 00:24:19.190164 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left).
2025-05-25 00:24:19.190175 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left).
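The long retry run above is an Ansible task polling until every container of the compose project reports a good state. The role's actual check is not shown in the log; as an illustration only, a minimal shell sketch of such a check could look like this (the `docker ps` label filter, the accepted status strings, and the function name are assumptions, not the role's implementation):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: succeed only when every container of a compose
# project is "Up" and not "(unhealthy)". Mirrors the retried
# "Check that all containers are in a good state" task conceptually.
all_containers_good() {
    local project=$1 status found=0
    while IFS= read -r status; do
        found=1
        case $status in
            *"(unhealthy)"*) return 1 ;;  # running but failing its healthcheck
            "Up "*) ;;                    # running (optionally "(healthy)")
            *) return 1 ;;                # restarting, exited, created, ...
        esac
    done < <(docker ps --all \
        --filter "label=com.docker.compose.project=${project}" \
        --format '{{.Status}}')
    [ "$found" -eq 1 ]  # no containers at all is also not a good state
}
```

A caller would retry this with a delay, e.g. `until all_containers_good netbox; do sleep 5; done`, much as Ansible's `retries`/`delay` do for the task above.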
2025-05-25 00:24:19.190186 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:19.190198 | orchestrator |
2025-05-25 00:24:19.190209 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-25 00:24:19.190220 | orchestrator |
2025-05-25 00:24:19.190231 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-25 00:24:21.269519 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:21.269575 | orchestrator |
2025-05-25 00:24:21.269583 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-25 00:24:21.415573 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-25 00:24:21.415656 | orchestrator |
2025-05-25 00:24:21.415670 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-25 00:24:21.480146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-25 00:24:21.480229 | orchestrator |
2025-05-25 00:24:21.480244 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-25 00:24:23.379354 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:23.379442 | orchestrator |
2025-05-25 00:24:23.379457 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-25 00:24:23.438251 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:23.438309 | orchestrator |
2025-05-25 00:24:23.438322 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-25 00:24:23.542520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-25 00:24:23.542575 | orchestrator |
2025-05-25 00:24:23.542584 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-25 00:24:26.426265 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-25 00:24:26.426364 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-25 00:24:26.426380 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-25 00:24:26.426393 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-25 00:24:26.426428 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-25 00:24:26.426439 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-25 00:24:26.426450 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-25 00:24:26.426461 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-25 00:24:26.426472 | orchestrator |
2025-05-25 00:24:26.426484 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-25 00:24:27.079122 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:27.079227 | orchestrator |
2025-05-25 00:24:27.079243 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-05-25 00:24:27.156966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-25 00:24:27.157043 | orchestrator |
2025-05-25 00:24:27.157056 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-25 00:24:28.374692 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-25 00:24:28.374786 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-25 00:24:28.374800 | orchestrator |
2025-05-25 00:24:28.374813 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-25 00:24:29.013761 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:29.013863 | orchestrator |
2025-05-25 00:24:29.013877 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-25 00:24:29.082342 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:24:29.082431 | orchestrator |
2025-05-25 00:24:29.082446 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-25 00:24:29.151929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-25 00:24:29.152059 | orchestrator |
2025-05-25 00:24:29.152096 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-25 00:24:30.557487 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:24:30.557602 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:24:30.557618 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:30.557631 | orchestrator |
2025-05-25 00:24:30.557643 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-25 00:24:31.194432 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:31.194535 | orchestrator |
2025-05-25 00:24:31.194552 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-25 00:24:31.287745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-25 00:24:31.287844 | orchestrator |
2025-05-25 00:24:31.287858 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-25 00:24:32.512241 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:24:32.512355 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-25 00:24:32.512371 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:32.512384 | orchestrator |
2025-05-25 00:24:32.512396 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-25 00:24:33.172435 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:33.172538 | orchestrator |
2025-05-25 00:24:33.172554 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] *****
2025-05-25 00:24:33.832397 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:33.832502 | orchestrator |
2025-05-25 00:24:33.832518 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-25 00:24:33.941305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-05-25 00:24:33.941395 | orchestrator |
2025-05-25 00:24:33.941409 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-25 00:24:34.530880 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:34.531038 | orchestrator |
2025-05-25 00:24:34.531068 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-25 00:24:34.945220 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:34.945337 | orchestrator |
2025-05-25 00:24:34.945350 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-25 00:24:36.289742 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-25 00:24:36.289882 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-25 00:24:36.289900 | orchestrator |
2025-05-25 00:24:36.289913 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-25 00:24:36.934385 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:36.934499 | orchestrator |
2025-05-25 00:24:36.934518 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-25 00:24:37.360275 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:37.360375 | orchestrator |
2025-05-25 00:24:37.360389 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-25 00:24:37.719388 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:37.719490 | orchestrator |
2025-05-25 00:24:37.719505 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-25 00:24:37.772686 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:24:37.772721 | orchestrator |
2025-05-25 00:24:37.772733 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-25 00:24:37.863855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-25 00:24:37.863916 | orchestrator |
2025-05-25 00:24:37.863929 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-25 00:24:37.912552 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:37.912633 | orchestrator |
2025-05-25 00:24:37.912648 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-25 00:24:39.920413 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-25 00:24:39.920524 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-25 00:24:39.920540 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-25 00:24:39.920552 | orchestrator |
2025-05-25 00:24:39.920564 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-25 00:24:40.641773 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:40.641876 | orchestrator |
2025-05-25 00:24:40.641913 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-25 00:24:41.350511 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:41.350617 | orchestrator |
2025-05-25 00:24:41.350634 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-25 00:24:42.077417 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:42.077525 | orchestrator |
2025-05-25 00:24:42.077540 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-25 00:24:42.164813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-25 00:24:42.164899 | orchestrator |
2025-05-25 00:24:42.164912 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-25 00:24:42.211366 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:42.211440 | orchestrator |
2025-05-25 00:24:42.211453 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-25 00:24:42.940494 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-25 00:24:42.940598 | orchestrator |
2025-05-25 00:24:42.940614 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-25 00:24:43.026919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-25 00:24:43.027057 | orchestrator |
2025-05-25 00:24:43.027071 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-25 00:24:43.764114 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:43.764221 | orchestrator |
2025-05-25 00:24:43.764238 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-25 00:24:44.403451 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:44.403581 | orchestrator |
2025-05-25 00:24:44.403620 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-25 00:24:44.466348 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:24:44.466447 | orchestrator |
2025-05-25 00:24:44.466462 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-25 00:24:44.525679 | orchestrator | ok: [testbed-manager]
2025-05-25 00:24:44.525771 | orchestrator |
2025-05-25 00:24:44.525794 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-25 00:24:45.339108 | orchestrator | changed: [testbed-manager]
2025-05-25 00:24:45.339208 | orchestrator |
2025-05-25 00:24:45.339223 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-25 00:25:27.578162 | orchestrator | changed: [testbed-manager]
2025-05-25 00:25:27.578285 | orchestrator |
2025-05-25 00:25:27.578303 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-25 00:25:28.230857 | orchestrator | ok: [testbed-manager]
2025-05-25 00:25:28.230965 | orchestrator |
2025-05-25 00:25:28.230982 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-25 00:25:31.024654 | orchestrator | changed: [testbed-manager]
2025-05-25 00:25:31.024760 | orchestrator |
2025-05-25 00:25:31.024776 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-25 00:25:31.090494 | orchestrator | ok: [testbed-manager]
2025-05-25 00:25:31.090574 | orchestrator |
2025-05-25 00:25:31.090588 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-25 00:25:31.090599 | orchestrator |
2025-05-25 00:25:31.090611 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-25 00:25:31.144373 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:25:31.144457 | orchestrator |
2025-05-25 00:25:31.144472 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-25 00:26:31.209523 | orchestrator | Pausing for 60 seconds
2025-05-25 00:26:31.209649 | orchestrator | changed: [testbed-manager]
2025-05-25 00:26:31.209665 | orchestrator |
2025-05-25 00:26:31.209679 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-25 00:26:36.769657 | orchestrator | changed: [testbed-manager]
2025-05-25 00:26:36.769773 | orchestrator |
2025-05-25 00:26:36.769790 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-25 00:27:18.471272 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-25 00:27:18.471394 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-25 00:27:18.471410 | orchestrator | changed: [testbed-manager] 2025-05-25 00:27:18.471423 | orchestrator | 2025-05-25 00:27:18.471435 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-25 00:27:24.240955 | orchestrator | changed: [testbed-manager] 2025-05-25 00:27:24.241074 | orchestrator | 2025-05-25 00:27:24.241091 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-25 00:27:24.320644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-25 00:27:24.320735 | orchestrator | 2025-05-25 00:27:24.320748 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-25 00:27:24.320761 | orchestrator | 2025-05-25 00:27:24.320772 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-25 00:27:24.384488 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:27:24.384566 | orchestrator | 2025-05-25 00:27:24.384580 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:27:24.384593 | orchestrator | testbed-manager : ok=110 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-25 00:27:24.384604 | orchestrator | 2025-05-25 00:27:24.499580 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-25 00:27:24.499673 | orchestrator | + deactivate 2025-05-25 00:27:24.499688 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-25 00:27:24.499701 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-25 00:27:24.499712 | orchestrator | + export PATH 2025-05-25 00:27:24.499723 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-25 
00:27:24.499734 | orchestrator | + '[' -n '' ']' 2025-05-25 00:27:24.499777 | orchestrator | + hash -r 2025-05-25 00:27:24.499788 | orchestrator | + '[' -n '' ']' 2025-05-25 00:27:24.499799 | orchestrator | + unset VIRTUAL_ENV 2025-05-25 00:27:24.499809 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-25 00:27:24.499820 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-05-25 00:27:24.499831 | orchestrator | + unset -f deactivate 2025-05-25 00:27:24.499842 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-25 00:27:24.505036 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-25 00:27:24.505060 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-25 00:27:24.505071 | orchestrator | + local max_attempts=60 2025-05-25 00:27:24.505082 | orchestrator | + local name=ceph-ansible 2025-05-25 00:27:24.505092 | orchestrator | + local attempt_num=1 2025-05-25 00:27:24.506507 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-25 00:27:24.539579 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-25 00:27:24.539614 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-25 00:27:24.539625 | orchestrator | + local max_attempts=60 2025-05-25 00:27:24.539636 | orchestrator | + local name=kolla-ansible 2025-05-25 00:27:24.539647 | orchestrator | + local attempt_num=1 2025-05-25 00:27:24.541012 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-25 00:27:24.579400 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-25 00:27:24.579431 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-25 00:27:24.579444 | orchestrator | + local max_attempts=60 2025-05-25 00:27:24.579455 | orchestrator | + local name=osism-ansible 2025-05-25 00:27:24.579465 | orchestrator | + local attempt_num=1 2025-05-25 00:27:24.580212 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-05-25 00:27:24.609986 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-25 00:27:24.610075 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-25 00:27:24.610088 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-25 00:27:25.295059 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-25 00:27:25.488989 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-25 00:27:25.489085 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489100 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489265 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-25 00:27:25.489285 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-25 00:27:25.489302 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489314 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489325 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489336 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 
seconds (healthy) 2025-05-25 00:27:25.489346 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489379 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-25 00:27:25.489390 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489400 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489411 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-25 00:27:25.489422 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489446 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489458 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.489469 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-25 00:27:25.494833 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-25 00:27:25.652534 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-25 00:27:25.652627 | orchestrator | netbox-netbox-1 
registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-25 00:27:25.652639 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-25 00:27:25.652650 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-25 00:27:25.652662 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-25 00:27:25.660166 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-25 00:27:25.709796 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-25 00:27:25.709840 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-25 00:27:25.714573 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-25 00:27:27.246885 | orchestrator | 2025-05-25 00:27:27 | INFO  | Task 511d21f0-5ba9-437a-9dc7-71a771f11dce (resolvconf) was prepared for execution. 2025-05-25 00:27:27.246983 | orchestrator | 2025-05-25 00:27:27 | INFO  | It takes a moment until task 511d21f0-5ba9-437a-9dc7-71a771f11dce (resolvconf) has been started and output is visible here. 
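The `wait_for_container_healthy` calls traced earlier (for `ceph-ansible`, `kolla-ansible` and `osism-ansible`) can be reconstructed from the `set -x` output. A hedged sketch follows; the retry delay and error message are assumptions, only the `docker inspect` health probe and the `max_attempts`/`name`/`attempt_num` locals are visible in the trace:

```shell
# Reconstruction of wait_for_container_healthy from the trace above; the
# actual deploy script may differ in retry delay and failure handling.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    while true; do
        # Query the container's health status, e.g. "healthy" or "starting".
        local status
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        [ "$status" = "healthy" ] && return 0
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed back-off between probes
    done
}
```

In the log all three containers report `healthy` on the first probe, so the loop body never retries.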
2025-05-25 00:27:30.193563 | orchestrator | 2025-05-25 00:27:30.194182 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-25 00:27:30.195331 | orchestrator | 2025-05-25 00:27:30.196410 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-25 00:27:30.196872 | orchestrator | Sunday 25 May 2025 00:27:30 +0000 (0:00:00.090) 0:00:00.090 ************ 2025-05-25 00:27:34.174095 | orchestrator | ok: [testbed-manager] 2025-05-25 00:27:34.174255 | orchestrator | 2025-05-25 00:27:34.176055 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-25 00:27:34.179693 | orchestrator | Sunday 25 May 2025 00:27:34 +0000 (0:00:03.985) 0:00:04.075 ************ 2025-05-25 00:27:34.234467 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:27:34.235075 | orchestrator | 2025-05-25 00:27:34.236909 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-25 00:27:34.238199 | orchestrator | Sunday 25 May 2025 00:27:34 +0000 (0:00:00.058) 0:00:04.134 ************ 2025-05-25 00:27:34.327406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-25 00:27:34.327477 | orchestrator | 2025-05-25 00:27:34.327491 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-25 00:27:34.327503 | orchestrator | Sunday 25 May 2025 00:27:34 +0000 (0:00:00.093) 0:00:04.227 ************ 2025-05-25 00:27:34.396974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-25 00:27:34.397890 | orchestrator | 2025-05-25 00:27:34.398877 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-05-25 00:27:34.399910 | orchestrator | Sunday 25 May 2025 00:27:34 +0000 (0:00:00.071) 0:00:04.299 ************ 2025-05-25 00:27:35.509109 | orchestrator | ok: [testbed-manager] 2025-05-25 00:27:35.509640 | orchestrator | 2025-05-25 00:27:35.510260 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-25 00:27:35.511580 | orchestrator | Sunday 25 May 2025 00:27:35 +0000 (0:00:01.109) 0:00:05.408 ************ 2025-05-25 00:27:35.569469 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:27:35.569710 | orchestrator | 2025-05-25 00:27:35.571545 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-25 00:27:35.572699 | orchestrator | Sunday 25 May 2025 00:27:35 +0000 (0:00:00.062) 0:00:05.471 ************ 2025-05-25 00:27:36.036359 | orchestrator | ok: [testbed-manager] 2025-05-25 00:27:36.036550 | orchestrator | 2025-05-25 00:27:36.036855 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-25 00:27:36.037078 | orchestrator | Sunday 25 May 2025 00:27:36 +0000 (0:00:00.466) 0:00:05.938 ************ 2025-05-25 00:27:36.110564 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:27:36.110643 | orchestrator | 2025-05-25 00:27:36.110755 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-25 00:27:36.114090 | orchestrator | Sunday 25 May 2025 00:27:36 +0000 (0:00:00.073) 0:00:06.011 ************ 2025-05-25 00:27:36.689509 | orchestrator | changed: [testbed-manager] 2025-05-25 00:27:36.690103 | orchestrator | 2025-05-25 00:27:36.692172 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-25 00:27:36.694124 | orchestrator | Sunday 25 May 2025 00:27:36 +0000 (0:00:00.578) 0:00:06.590 ************ 2025-05-25 00:27:37.796828 | orchestrator | changed: 
[testbed-manager] 2025-05-25 00:27:37.796919 | orchestrator | 2025-05-25 00:27:37.797971 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-25 00:27:37.798298 | orchestrator | Sunday 25 May 2025 00:27:37 +0000 (0:00:01.107) 0:00:07.697 ************ 2025-05-25 00:27:38.731466 | orchestrator | ok: [testbed-manager] 2025-05-25 00:27:38.731561 | orchestrator | 2025-05-25 00:27:38.732237 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-25 00:27:38.732597 | orchestrator | Sunday 25 May 2025 00:27:38 +0000 (0:00:00.933) 0:00:08.631 ************ 2025-05-25 00:27:38.812067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-25 00:27:38.812130 | orchestrator | 2025-05-25 00:27:38.812616 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-25 00:27:38.813098 | orchestrator | Sunday 25 May 2025 00:27:38 +0000 (0:00:00.081) 0:00:08.713 ************ 2025-05-25 00:27:39.947666 | orchestrator | changed: [testbed-manager] 2025-05-25 00:27:39.948441 | orchestrator | 2025-05-25 00:27:39.949059 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:27:39.949364 | orchestrator | 2025-05-25 00:27:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:27:39.949681 | orchestrator | 2025-05-25 00:27:39 | INFO  | Please wait and do not abort execution. 
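The resolvconf tasks above boil down to one key change on the manager: `/etc/resolv.conf` is replaced with a symlink to systemd-resolved's stub file, then the service is restarted. A minimal shell equivalent of the link task (the `$root` prefix is an illustrative test seam, empty on a real host; the role itself is Ansible, not shell):

```shell
# Shell equivalent of the role's "Link /run/systemd/resolve/stub-resolv.conf
# to /etc/resolv.conf" task. $root is only for illustration/testing.
resolv_link() {
    local root=$1
    ln -sfn /run/systemd/resolve/stub-resolv.conf "$root/etc/resolv.conf"
}

# On a live Debian-family host the role then restarts the resolver:
#   systemctl restart systemd-resolved
```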
2025-05-25 00:27:39.950538 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:27:39.951333 | orchestrator | 2025-05-25 00:27:39.952039 | orchestrator | Sunday 25 May 2025 00:27:39 +0000 (0:00:01.134) 0:00:09.848 ************ 2025-05-25 00:27:39.952679 | orchestrator | =============================================================================== 2025-05-25 00:27:39.953426 | orchestrator | Gathering Facts --------------------------------------------------------- 3.99s 2025-05-25 00:27:39.954351 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2025-05-25 00:27:39.954768 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s 2025-05-25 00:27:39.955362 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2025-05-25 00:27:39.956326 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2025-05-25 00:27:39.956955 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2025-05-25 00:27:39.957688 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2025-05-25 00:27:39.958433 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-05-25 00:27:39.959039 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-05-25 00:27:39.959456 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-05-25 00:27:39.959877 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-05-25 00:27:39.961394 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-05-25 00:27:39.961958 | orchestrator | 
osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-05-25 00:27:40.340048 | orchestrator | + osism apply sshconfig 2025-05-25 00:27:41.733541 | orchestrator | 2025-05-25 00:27:41 | INFO  | Task ed0de639-c5ad-494c-8ee4-babc30915a5b (sshconfig) was prepared for execution. 2025-05-25 00:27:41.733645 | orchestrator | 2025-05-25 00:27:41 | INFO  | It takes a moment until task ed0de639-c5ad-494c-8ee4-babc30915a5b (sshconfig) has been started and output is visible here. 2025-05-25 00:27:44.693668 | orchestrator | 2025-05-25 00:27:44.693774 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-25 00:27:44.695299 | orchestrator | 2025-05-25 00:27:44.695328 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-25 00:27:44.695340 | orchestrator | Sunday 25 May 2025 00:27:44 +0000 (0:00:00.100) 0:00:00.100 ************ 2025-05-25 00:27:45.238191 | orchestrator | ok: [testbed-manager] 2025-05-25 00:27:45.238311 | orchestrator | 2025-05-25 00:27:45.238934 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-25 00:27:45.239711 | orchestrator | Sunday 25 May 2025 00:27:45 +0000 (0:00:00.546) 0:00:00.647 ************ 2025-05-25 00:27:45.711429 | orchestrator | changed: [testbed-manager] 2025-05-25 00:27:45.711844 | orchestrator | 2025-05-25 00:27:45.713567 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-25 00:27:45.713593 | orchestrator | Sunday 25 May 2025 00:27:45 +0000 (0:00:00.474) 0:00:01.121 ************ 2025-05-25 00:27:51.411235 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-25 00:27:51.411378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-25 00:27:51.411521 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-25 00:27:51.411538 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-25 00:27:51.411563 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-25 00:27:51.411648 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-25 00:27:51.414184 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-25 00:27:51.414331 | orchestrator | 2025-05-25 00:27:51.414819 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-25 00:27:51.415273 | orchestrator | Sunday 25 May 2025 00:27:51 +0000 (0:00:05.696) 0:00:06.818 ************ 2025-05-25 00:27:51.490356 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:27:51.490439 | orchestrator | 2025-05-25 00:27:51.491231 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-25 00:27:51.491812 | orchestrator | Sunday 25 May 2025 00:27:51 +0000 (0:00:00.083) 0:00:06.901 ************ 2025-05-25 00:27:52.035921 | orchestrator | changed: [testbed-manager] 2025-05-25 00:27:52.036020 | orchestrator | 2025-05-25 00:27:52.036035 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:27:52.036048 | orchestrator | 2025-05-25 00:27:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:27:52.036060 | orchestrator | 2025-05-25 00:27:52 | INFO  | Please wait and do not abort execution. 
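The sshconfig play above writes one config fragment per host into `.ssh/config.d` ("Ensure config for each host exist") and then concatenates them ("Assemble ssh config"). A sketch of that flow; the fragment layout, the `User dragon` line (inferred from `/home/dragon` earlier in this log) and the addresses are assumptions, not the role's actual template:

```shell
# Per-host SSH config fragments assembled into one file, as the
# sshconfig role does. SSH_DIR is a test seam defaulting to ~/.ssh.
SSH_DIR=${SSH_DIR:-$HOME/.ssh}

write_host_fragment() {
    local name=$1 addr=$2
    mkdir -p "$SSH_DIR/config.d"
    cat > "$SSH_DIR/config.d/$name" <<EOF
Host $name
    HostName $addr
    User dragon
EOF
}

assemble_ssh_config() {
    # Concatenate all fragments into the final ssh config.
    cat "$SSH_DIR"/config.d/* > "$SSH_DIR/config"
}
```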
2025-05-25 00:27:52.038449 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:27:52.038499 | orchestrator | 2025-05-25 00:27:52.039243 | orchestrator | Sunday 25 May 2025 00:27:52 +0000 (0:00:00.543) 0:00:07.445 ************ 2025-05-25 00:27:52.040254 | orchestrator | =============================================================================== 2025-05-25 00:27:52.041314 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.70s 2025-05-25 00:27:52.041843 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2025-05-25 00:27:52.042572 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2025-05-25 00:27:52.043202 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.47s 2025-05-25 00:27:52.043939 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-05-25 00:27:52.430504 | orchestrator | + osism apply known-hosts 2025-05-25 00:27:53.922891 | orchestrator | 2025-05-25 00:27:53 | INFO  | Task 729ee318-301b-4f44-8333-98d2725922ce (known-hosts) was prepared for execution. 2025-05-25 00:27:53.922966 | orchestrator | 2025-05-25 00:27:53 | INFO  | It takes a moment until task 729ee318-301b-4f44-8333-98d2725922ce (known-hosts) has been started and output is visible here. 
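The known-hosts play that follows runs `ssh-keyscan` against every node twice, once by inventory hostname and once by `ansible_host` address, and writes the rsa, ecdsa and ed25519 entries. A sketch of the scan-and-merge effect (`merge_known_hosts` is an illustrative helper, not the role's code; the role writes via Ansible tasks):

```shell
# Merge known_hosts entries without duplicates, so rescanning a host
# (by hostname and again by IP, as in this play) stays idempotent.
merge_known_hosts() {
    local file=$1; shift
    printf '%s\n' "$@" >> "$file"
    sort -u "$file" -o "$file"
}

# Example wiring (needs network access to the nodes):
#   for host in testbed-manager testbed-node-{0..5}; do
#       merge_known_hosts ~/.ssh/known_hosts \
#           "$(ssh-keyscan -t rsa,ecdsa,ed25519 "$host" 2>/dev/null)"
#   done
```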
2025-05-25 00:27:56.863009 | orchestrator | 2025-05-25 00:27:56.864190 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-25 00:27:56.864370 | orchestrator | 2025-05-25 00:27:56.864919 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-25 00:27:56.865691 | orchestrator | Sunday 25 May 2025 00:27:56 +0000 (0:00:00.105) 0:00:00.105 ************ 2025-05-25 00:28:02.857396 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-25 00:28:02.857726 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-25 00:28:02.857910 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-25 00:28:02.859316 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-25 00:28:02.859805 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-25 00:28:02.861501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-25 00:28:02.861994 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-25 00:28:02.862752 | orchestrator | 2025-05-25 00:28:02.863647 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-25 00:28:02.864094 | orchestrator | Sunday 25 May 2025 00:28:02 +0000 (0:00:05.994) 0:00:06.101 ************ 2025-05-25 00:28:03.038613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-25 00:28:03.039495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-25 00:28:03.042064 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-25 00:28:03.042092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-25 00:28:03.043129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-25 00:28:03.043385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-25 00:28:03.044030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-25 00:28:03.044945 | orchestrator | 2025-05-25 00:28:03.045314 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:03.046106 | orchestrator | Sunday 25 May 2025 00:28:03 +0000 (0:00:00.182) 0:00:06.283 ************ 2025-05-25 00:28:04.161457 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyEva0koE9D8TjpvC0FwvFUWlGs6VKd9+z80omnrcWpQtBFmAsfOL62MRNWO0b6x2Qg5zlnooWzdsv7b/OWHyoLtMbSO4L+t3BJtzs0tB+rgyf7hqLYZ6QJwUqijEd3d6RalS+ToVAx88ci8je+JsNa64UMcx5joaIFr3yLBE2dAOaeS4CM/Ov3pksG40ObaRMWBxE+JujIFs3NsB77ug3G9dymj7L0KIo2DzmCiZ0E+o/Sk2tPxjxMmndRrnD4GA1l/C8sR/Je5CsFSp7Ce4KVXqiCnX5AP7gYrLLZjk2KapQoStEqhJNPa3BiboFk5AUi4lF5EaT/HM0ijIM9pZ9nIj4TJahtoRBknymeEDGPbLbwtgqJ7sV5E8TlvKxwonsUOtJ480zHRY2JF1uwaiF+DIpMeie+rHyj1MAhfI+ePfVA1GzaDG7PNGusn4Dr1lgiCMRZKjK0KNH4drAqx5fMRNe3HpGV+dtd6eWObD9EJvlCYU6pVajlfinQ055Mwc=) 2025-05-25 00:28:04.161566 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGDUbgd7DW/B7NQOW/vehWuvC4OhyNUFJV+z6wPzkShz2OxO4HLY1ZpESe25JzASDITW0LYbSQm9B7mcLYHv/PQ=) 2025-05-25 00:28:04.161584 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILnxXnZsOp2J+Ab/u0BPgk3GwGdTAN8nbq4VyM/G29AJ) 2025-05-25 00:28:04.161598 | orchestrator | 2025-05-25 00:28:04.161610 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:04.161622 | orchestrator | Sunday 25 May 2025 00:28:04 +0000 (0:00:01.120) 0:00:07.404 ************ 2025-05-25 00:28:05.189854 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpK7VXwMOPDTs/yub8PYRlWjMUFZan+6R7gOGA9YQo20Fc9bw1VokJJqDIblfa9A14AZHspMIvAd1wJEafO44g=) 2025-05-25 00:28:05.190695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZaRazg/a11EgJOkKNMfynry7mZ8dd6MkWAtkVm+7lcZXzsSkY9rb6WnUU0b0Yp00MnFjRzO3KG1xoNABNLSo2wmSVQW7qOAWLZuXhHLbxh6XgWyJfPpYCwcAu1vomeLMxC1Ywmv7l1R72WafHq4yy08nG1VJjS7adRJAfrpMEK1go8PwU/b2WuV9glaq2AAgvmgn+7h4qEnZA/XexNPscWq4f1+sje3sZXjraMYCWI5uDCJnd6yc6fBzJ7+ur4V950HozC0m/DN0eHjhKK15iyGp1EvuCllzaC+1K4EJR0iQORd2+IfOfXTbmbGx/FzE0RG1XcnPM/a6mcoiZAkw+++JXJqzQKvVYDe3AoSp8KJffHmRRFUDvraEN0vI7bxVPyZy1/DPt6QPFmkJA286FDcDlSfIBtjk5Ul6GrbkltXO7JjzHuWtAhuHVFjoNrsXCLGyWbqRwD89GPOZJbnt2EbkJjcP5eTJjbg67uxoFf8j0Lh0W3ch7hFiNAT5Y+DU=) 2025-05-25 00:28:05.191372 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICJ6X8Sagmr9OG0c8EWlFyYdo+OODQ8FNuyUp7Mp+i5z) 2025-05-25 00:28:05.192323 | orchestrator | 2025-05-25 00:28:05.193220 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:05.193546 | orchestrator | Sunday 25 May 2025 00:28:05 +0000 (0:00:01.029) 0:00:08.434 ************ 2025-05-25 00:28:06.242773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJd0iJ81Kxv1opxmb9VVip/qh9THwlzaS/dLjxikSy5Q) 2025-05-25 00:28:06.242881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZemyvP6aOY4+tp9jAiloZhDK1/8GHm/Sc2TX5P0hE5nV1X82K/k6w21rAiurDEMOJ9TS6OsdiNRuOrcHm21nPaRrdB3c2nwdQzp5Wmlue/K3Cglv2LZi0S/PqvVbP2vQwGZ04PAp3zKoXA7303M+qpAAsO3OAVbTSapXlKLysIfD2CfAJN0W6g2jMYKGM5S9bpBlmBNASX+T9MLZ40C7EkdgW7/Abu9jptPS5qNxUBQeCcfaTm+3kXsEISu2gTvUPp4wJ6Lnp0KNGNYCyfA5a8Y2J1SvtdO0LyZJREtwUf4exP+7Gvr67waNJM6cWoBfTd25+GHnAh8a828b/HAppcrqbA3CbH/p1vJG7CByCbquWyogqguEHXXn1N+PkncwN1Z8S4d0VPCqxdG8HZUifR4GB2Z6B1DsHz22cpgKJuSi7O6dsl0Dy/KdjSi53A+JRds3YQ6i36tcQojRPuP+hACws+vS+OVULa9M51ZaQTxUrzqet86e+dwRdJB6Mv9c=) 2025-05-25 00:28:06.243406 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgHZRcl0/PRWna7zPgwiyyAxSUQsarCkQ5uIYpHAQIylOZBbHbisZYdZI31HUkGSCsbWwUZgCZzU2kA67QZ6q0=) 2025-05-25 00:28:06.244360 | orchestrator | 2025-05-25 00:28:06.245300 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:06.245785 | orchestrator | Sunday 25 May 2025 00:28:06 +0000 (0:00:01.051) 0:00:09.486 ************ 2025-05-25 00:28:07.271951 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2JezhZeVgqGCxoHqc69wAOkwYEXo6BbOo1Hze49V+lihkXKrGsuIBNalfHyrD3B172rgJzKc+0juyLIB+uBIshYFacjj4c9zW446X9pp1oORFiezrLfp+cq7B5El4rC79ieWhm7wRIHDbjietSByOSbnZaLPDpi3KbLhxSfGKTLEsnxlABpTh3e6pzHgyEaK3ceaovg7sETMJoxWbL2BaolaeIYJdLYV5Xk5haSkcz6rfN/DKmjO/EwM5JDc/6dp+lFbfuMoTcEj9zWeVelCeie1T9/1JY9huPH055QQTHEczw6NZxvOOHS4lyN+/Z80/f0YujKxAVv6OcmnbnXekyGC+uTtlaamJoCCj3DyaKyAeyYDtxFNDZd/siIxljWyLzEaxfV6Uby+MpydwV6fCN7VpLGvEC4QiecPcDkFGiC6l4PRaw3OVuqvZ4COlvYGWDKdu2rsXZ2SoGVx6P/kk0b7OQZS2HyzkloqUbEM9VR0pWIZrUOZe/fGMlJpZFLE=) 2025-05-25 00:28:07.272852 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFlNs0VRG+eyxbWEqUsGxGwmHjWslbTyw2vBqKfEJ2bYqfcF0idKbKCdl8yj+ETY8g2K8pDJkoapycUdZt6d0Qk=) 2025-05-25 00:28:07.273269 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTkhvoyBqJcN4uZdZHgtKBKQPoFS2w5+sR2cemWIMcI) 2025-05-25 00:28:07.274270 | orchestrator | 2025-05-25 00:28:07.274681 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:07.275559 | orchestrator | Sunday 25 May 2025 00:28:07 +0000 (0:00:01.029) 0:00:10.515 ************ 2025-05-25 00:28:08.366505 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDaE6PbOG3aQ2Oc5TADxmNl/6Iw5WY8q05g1PlqvUGE8sl5nFOkkCoJUaiN1TpWcIWQNkHeCZZ10yb2m9Zk8d7z2t1UGifhpzIJpuupbf2hsZw0dDvSAiI97te1ybgRcxZ1kXNUa+wwxN8MP7pb3ZwKn80e327RHVrZROvjyBTscx2rLvV6KT9C7ZhvOMZrNO70ATkHRJ7GnYIj3nkZaeF40oMGYsAmrikXgb1JhJcjikMF2zGvf1BLvuBUQqdLy7qYEAVXBxpXwUg2G7dG8N1mp5yNFPWl3rZ4wgmasaszx8B5ih+SG4iWyoPzXisaRirW/+90hKvpM6sRPQTRi6D3ANvi6Z+zaRYS9+QYAYXc/PRoLI1ka/a/6fagAMmWAxnW2S+lyGH4I58rH1qQc28yVAT6a0g8MIaUZkZVJk+hJwFy72qqKIHdOcLwSGiCbAJqVmJLgr02MYxQS+P/Jxrs78pU/haxDbgXDZOBVXbsFqm1NcL8BjtIr6V7PCol/uM=) 2025-05-25 00:28:08.367290 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNNrlCNeHD1OuwdVmk8Kp9IjqfTvH1/iGOvfov3yy17F/UWDJWHXCZ5jt4QaULADB9+CeWr2O6VjNB70u+PlwIM=) 2025-05-25 00:28:08.369533 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/WzpGEqIKr1KpCfQykUbJMEMk0Nqe8hxfv+tBwOdI5) 2025-05-25 00:28:08.370246 | orchestrator | 2025-05-25 00:28:08.372875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:08.374127 | orchestrator | Sunday 25 May 2025 00:28:08 +0000 (0:00:01.094) 0:00:11.610 ************ 2025-05-25 00:28:09.470149 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIDKC+J6rjfkVH+eT9s0G1NlfNAv6y9E88vVKaJzsFILhmKYfg0WVrNLhyH/sX82w3b4iQZnXwBNF4UadGhsuBU=) 2025-05-25 00:28:09.470385 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDahMaZ/rECGeLi4MPmYJhicgTPyBaOYVBpPwMMBAg/d7GDl7djOfHsjgIx0a6xogLqNdq/Tyw8ptPP5PCTW/O7w+tngJuHhgnmpDlaDbx0ccJV0hL2vVu72MZNKoFsGtjqmi2T9gmriIJwueD440RuqFc+s1HihvG2nOYrmonR3ARBY2PZwAbtLMt+A/i3Y64pAjZciMie/B0393BOfwHDpgO9VstFrPOFi1oHE+VJJI1Q5IubrQ/KS0RTabU825ft3q3AiQQN6mRmgdddh/ntVckT37FZoHkkwR2rVRs2PTUUdCdj5DhZOiVPx2z9My6tVj337GLwZ4XsJ3sOn5PCMqFogWsUhBpL29lUIcLvEvOd8wYxGM/2kaC1ni3uTuks/PT/PvDJktTujcEKZmiB6YPx6jW41/jyRQGV7Z4qvG5FBxb8dYb+kP5CN9JgFmTpjXYd/5oeECM16RFmQS2M9RzRETyH/GEqn8jJCN/36GllSCir1GV98npkxZWmN0=) 2025-05-25 00:28:09.471698 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPdllf9HjY5aWtQoaHq+0qHlV1H2xTf6v3XjdPomkPAG) 2025-05-25 00:28:09.472281 | orchestrator | 2025-05-25 00:28:09.473556 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:09.474709 | orchestrator | Sunday 25 May 2025 00:28:09 +0000 (0:00:01.103) 0:00:12.714 ************ 2025-05-25 00:28:10.549064 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKirV7tzx+rbmNTKMu76fam4ZI3B+2slbFEk6D9aZFy+) 2025-05-25 00:28:10.549468 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC96EySgOvinkfRaJBfXCfwhM+/jkfB+fp/gA8oNZ8qg1/xqWxpNDrMUOc4+BJmCgv9GFIz5/pxbZLrned+6E6MD5TbaBPHKDRPSedcMI8Dpx6PAl9eE1T2/IlTdDp+FXwhnUJThyf/EWJ8WDKxhw+ORxbz9DB9LSqTDKH5Jw5P3/6fXKWqS5qmAkc3q+U4ZESFD/ddBYlvKJ5UWCqCOnXF7vFDO0FgYokh0P8+xuLblnL9zIPhVuKOk5ZWgjnrt6/sNlKln4zlak2kaIm1heAsw7iTa/kL+rXlkfFPxHtMR+yBJE68w8JQwa53DpaJyuwoaEVczF4zn0qD1eBEuSQ/a67FnXXH09oIUJFxrbp8kuAgMvQWgc7QspFn2T89AYBcODIS8mZsYpRJWvFdU7a2aK3ZWjKF6EnvqMadv97J12OUBAjSF24sX64fuZphBftD3SHepKIOjM7mnrIMYEs50GJrnnBZ7nCb62+2dqIMMDdyb6642KFMWGGzKQqtQw8=) 2025-05-25 00:28:10.550254 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB/gC7dZigA0eMBxjFebT4XNJESk7S8ftEJRk9+v9l5ziPt6kpg8HWSXnb6kekzXKd0FuQtZsx7FIRfDn81uuIk=) 2025-05-25 00:28:10.550471 | orchestrator | 2025-05-25 00:28:10.551164 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-25 00:28:10.551491 | orchestrator | Sunday 25 May 2025 00:28:10 +0000 (0:00:01.078) 0:00:13.792 ************ 2025-05-25 00:28:15.857012 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-25 00:28:15.857897 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-25 00:28:15.858745 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-25 00:28:15.859425 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-25 00:28:15.860100 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-25 00:28:15.860746 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-25 00:28:15.861904 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-25 00:28:15.862597 | orchestrator | 2025-05-25 00:28:15.863345 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-25 00:28:15.863972 | orchestrator | Sunday 25 May 2025 00:28:15 +0000 (0:00:05.302) 0:00:19.095 ************ 2025-05-25 00:28:16.012025 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-25 00:28:16.012405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-25 00:28:16.013875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-25 00:28:16.014532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-25 00:28:16.015248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-25 00:28:16.015482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-25 00:28:16.016153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-25 00:28:16.017543 | orchestrator | 2025-05-25 00:28:16.017795 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:16.017834 | orchestrator | Sunday 25 May 2025 00:28:16 +0000 (0:00:00.161) 0:00:19.257 ************ 2025-05-25 00:28:17.066166 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILnxXnZsOp2J+Ab/u0BPgk3GwGdTAN8nbq4VyM/G29AJ) 2025-05-25 00:28:17.066505 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyEva0koE9D8TjpvC0FwvFUWlGs6VKd9+z80omnrcWpQtBFmAsfOL62MRNWO0b6x2Qg5zlnooWzdsv7b/OWHyoLtMbSO4L+t3BJtzs0tB+rgyf7hqLYZ6QJwUqijEd3d6RalS+ToVAx88ci8je+JsNa64UMcx5joaIFr3yLBE2dAOaeS4CM/Ov3pksG40ObaRMWBxE+JujIFs3NsB77ug3G9dymj7L0KIo2DzmCiZ0E+o/Sk2tPxjxMmndRrnD4GA1l/C8sR/Je5CsFSp7Ce4KVXqiCnX5AP7gYrLLZjk2KapQoStEqhJNPa3BiboFk5AUi4lF5EaT/HM0ijIM9pZ9nIj4TJahtoRBknymeEDGPbLbwtgqJ7sV5E8TlvKxwonsUOtJ480zHRY2JF1uwaiF+DIpMeie+rHyj1MAhfI+ePfVA1GzaDG7PNGusn4Dr1lgiCMRZKjK0KNH4drAqx5fMRNe3HpGV+dtd6eWObD9EJvlCYU6pVajlfinQ055Mwc=) 2025-05-25 00:28:17.067238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGDUbgd7DW/B7NQOW/vehWuvC4OhyNUFJV+z6wPzkShz2OxO4HLY1ZpESe25JzASDITW0LYbSQm9B7mcLYHv/PQ=) 2025-05-25 00:28:17.067614 | orchestrator | 2025-05-25 00:28:17.067961 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:17.068536 | orchestrator | Sunday 25 May 2025 00:28:17 +0000 (0:00:01.053) 0:00:20.310 ************ 2025-05-25 00:28:18.122717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZaRazg/a11EgJOkKNMfynry7mZ8dd6MkWAtkVm+7lcZXzsSkY9rb6WnUU0b0Yp00MnFjRzO3KG1xoNABNLSo2wmSVQW7qOAWLZuXhHLbxh6XgWyJfPpYCwcAu1vomeLMxC1Ywmv7l1R72WafHq4yy08nG1VJjS7adRJAfrpMEK1go8PwU/b2WuV9glaq2AAgvmgn+7h4qEnZA/XexNPscWq4f1+sje3sZXjraMYCWI5uDCJnd6yc6fBzJ7+ur4V950HozC0m/DN0eHjhKK15iyGp1EvuCllzaC+1K4EJR0iQORd2+IfOfXTbmbGx/FzE0RG1XcnPM/a6mcoiZAkw+++JXJqzQKvVYDe3AoSp8KJffHmRRFUDvraEN0vI7bxVPyZy1/DPt6QPFmkJA286FDcDlSfIBtjk5Ul6GrbkltXO7JjzHuWtAhuHVFjoNrsXCLGyWbqRwD89GPOZJbnt2EbkJjcP5eTJjbg67uxoFf8j0Lh0W3ch7hFiNAT5Y+DU=) 2025-05-25 00:28:18.122825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIpK7VXwMOPDTs/yub8PYRlWjMUFZan+6R7gOGA9YQo20Fc9bw1VokJJqDIblfa9A14AZHspMIvAd1wJEafO44g=) 
2025-05-25 00:28:18.122846 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICJ6X8Sagmr9OG0c8EWlFyYdo+OODQ8FNuyUp7Mp+i5z) 2025-05-25 00:28:18.124102 | orchestrator | 2025-05-25 00:28:18.125277 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:18.126157 | orchestrator | Sunday 25 May 2025 00:28:18 +0000 (0:00:01.055) 0:00:21.365 ************ 2025-05-25 00:28:19.136855 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgHZRcl0/PRWna7zPgwiyyAxSUQsarCkQ5uIYpHAQIylOZBbHbisZYdZI31HUkGSCsbWwUZgCZzU2kA67QZ6q0=) 2025-05-25 00:28:19.136966 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZemyvP6aOY4+tp9jAiloZhDK1/8GHm/Sc2TX5P0hE5nV1X82K/k6w21rAiurDEMOJ9TS6OsdiNRuOrcHm21nPaRrdB3c2nwdQzp5Wmlue/K3Cglv2LZi0S/PqvVbP2vQwGZ04PAp3zKoXA7303M+qpAAsO3OAVbTSapXlKLysIfD2CfAJN0W6g2jMYKGM5S9bpBlmBNASX+T9MLZ40C7EkdgW7/Abu9jptPS5qNxUBQeCcfaTm+3kXsEISu2gTvUPp4wJ6Lnp0KNGNYCyfA5a8Y2J1SvtdO0LyZJREtwUf4exP+7Gvr67waNJM6cWoBfTd25+GHnAh8a828b/HAppcrqbA3CbH/p1vJG7CByCbquWyogqguEHXXn1N+PkncwN1Z8S4d0VPCqxdG8HZUifR4GB2Z6B1DsHz22cpgKJuSi7O6dsl0Dy/KdjSi53A+JRds3YQ6i36tcQojRPuP+hACws+vS+OVULa9M51ZaQTxUrzqet86e+dwRdJB6Mv9c=) 2025-05-25 00:28:19.138581 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJd0iJ81Kxv1opxmb9VVip/qh9THwlzaS/dLjxikSy5Q) 2025-05-25 00:28:19.140152 | orchestrator | 2025-05-25 00:28:19.141301 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:19.142370 | orchestrator | Sunday 25 May 2025 00:28:19 +0000 (0:00:01.014) 0:00:22.380 ************ 2025-05-25 00:28:20.157501 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIOTkhvoyBqJcN4uZdZHgtKBKQPoFS2w5+sR2cemWIMcI) 2025-05-25 00:28:20.158818 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2JezhZeVgqGCxoHqc69wAOkwYEXo6BbOo1Hze49V+lihkXKrGsuIBNalfHyrD3B172rgJzKc+0juyLIB+uBIshYFacjj4c9zW446X9pp1oORFiezrLfp+cq7B5El4rC79ieWhm7wRIHDbjietSByOSbnZaLPDpi3KbLhxSfGKTLEsnxlABpTh3e6pzHgyEaK3ceaovg7sETMJoxWbL2BaolaeIYJdLYV5Xk5haSkcz6rfN/DKmjO/EwM5JDc/6dp+lFbfuMoTcEj9zWeVelCeie1T9/1JY9huPH055QQTHEczw6NZxvOOHS4lyN+/Z80/f0YujKxAVv6OcmnbnXekyGC+uTtlaamJoCCj3DyaKyAeyYDtxFNDZd/siIxljWyLzEaxfV6Uby+MpydwV6fCN7VpLGvEC4QiecPcDkFGiC6l4PRaw3OVuqvZ4COlvYGWDKdu2rsXZ2SoGVx6P/kk0b7OQZS2HyzkloqUbEM9VR0pWIZrUOZe/fGMlJpZFLE=) 2025-05-25 00:28:20.160081 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFlNs0VRG+eyxbWEqUsGxGwmHjWslbTyw2vBqKfEJ2bYqfcF0idKbKCdl8yj+ETY8g2K8pDJkoapycUdZt6d0Qk=) 2025-05-25 00:28:20.160717 | orchestrator | 2025-05-25 00:28:20.161408 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:20.161834 | orchestrator | Sunday 25 May 2025 00:28:20 +0000 (0:00:01.021) 0:00:23.401 ************ 2025-05-25 00:28:21.219347 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/WzpGEqIKr1KpCfQykUbJMEMk0Nqe8hxfv+tBwOdI5) 2025-05-25 00:28:21.219463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDaE6PbOG3aQ2Oc5TADxmNl/6Iw5WY8q05g1PlqvUGE8sl5nFOkkCoJUaiN1TpWcIWQNkHeCZZ10yb2m9Zk8d7z2t1UGifhpzIJpuupbf2hsZw0dDvSAiI97te1ybgRcxZ1kXNUa+wwxN8MP7pb3ZwKn80e327RHVrZROvjyBTscx2rLvV6KT9C7ZhvOMZrNO70ATkHRJ7GnYIj3nkZaeF40oMGYsAmrikXgb1JhJcjikMF2zGvf1BLvuBUQqdLy7qYEAVXBxpXwUg2G7dG8N1mp5yNFPWl3rZ4wgmasaszx8B5ih+SG4iWyoPzXisaRirW/+90hKvpM6sRPQTRi6D3ANvi6Z+zaRYS9+QYAYXc/PRoLI1ka/a/6fagAMmWAxnW2S+lyGH4I58rH1qQc28yVAT6a0g8MIaUZkZVJk+hJwFy72qqKIHdOcLwSGiCbAJqVmJLgr02MYxQS+P/Jxrs78pU/haxDbgXDZOBVXbsFqm1NcL8BjtIr6V7PCol/uM=) 2025-05-25 00:28:21.220092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNNrlCNeHD1OuwdVmk8Kp9IjqfTvH1/iGOvfov3yy17F/UWDJWHXCZ5jt4QaULADB9+CeWr2O6VjNB70u+PlwIM=) 2025-05-25 00:28:21.220808 | orchestrator | 2025-05-25 00:28:21.221112 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:21.221664 | orchestrator | Sunday 25 May 2025 00:28:21 +0000 (0:00:01.060) 0:00:24.462 ************ 2025-05-25 00:28:22.235322 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDahMaZ/rECGeLi4MPmYJhicgTPyBaOYVBpPwMMBAg/d7GDl7djOfHsjgIx0a6xogLqNdq/Tyw8ptPP5PCTW/O7w+tngJuHhgnmpDlaDbx0ccJV0hL2vVu72MZNKoFsGtjqmi2T9gmriIJwueD440RuqFc+s1HihvG2nOYrmonR3ARBY2PZwAbtLMt+A/i3Y64pAjZciMie/B0393BOfwHDpgO9VstFrPOFi1oHE+VJJI1Q5IubrQ/KS0RTabU825ft3q3AiQQN6mRmgdddh/ntVckT37FZoHkkwR2rVRs2PTUUdCdj5DhZOiVPx2z9My6tVj337GLwZ4XsJ3sOn5PCMqFogWsUhBpL29lUIcLvEvOd8wYxGM/2kaC1ni3uTuks/PT/PvDJktTujcEKZmiB6YPx6jW41/jyRQGV7Z4qvG5FBxb8dYb+kP5CN9JgFmTpjXYd/5oeECM16RFmQS2M9RzRETyH/GEqn8jJCN/36GllSCir1GV98npkxZWmN0=) 2025-05-25 00:28:22.236285 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIDKC+J6rjfkVH+eT9s0G1NlfNAv6y9E88vVKaJzsFILhmKYfg0WVrNLhyH/sX82w3b4iQZnXwBNF4UadGhsuBU=) 
2025-05-25 00:28:22.236859 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPdllf9HjY5aWtQoaHq+0qHlV1H2xTf6v3XjdPomkPAG) 2025-05-25 00:28:22.237601 | orchestrator | 2025-05-25 00:28:22.238389 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-25 00:28:22.238897 | orchestrator | Sunday 25 May 2025 00:28:22 +0000 (0:00:01.016) 0:00:25.479 ************ 2025-05-25 00:28:23.303996 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKirV7tzx+rbmNTKMu76fam4ZI3B+2slbFEk6D9aZFy+) 2025-05-25 00:28:23.304245 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC96EySgOvinkfRaJBfXCfwhM+/jkfB+fp/gA8oNZ8qg1/xqWxpNDrMUOc4+BJmCgv9GFIz5/pxbZLrned+6E6MD5TbaBPHKDRPSedcMI8Dpx6PAl9eE1T2/IlTdDp+FXwhnUJThyf/EWJ8WDKxhw+ORxbz9DB9LSqTDKH5Jw5P3/6fXKWqS5qmAkc3q+U4ZESFD/ddBYlvKJ5UWCqCOnXF7vFDO0FgYokh0P8+xuLblnL9zIPhVuKOk5ZWgjnrt6/sNlKln4zlak2kaIm1heAsw7iTa/kL+rXlkfFPxHtMR+yBJE68w8JQwa53DpaJyuwoaEVczF4zn0qD1eBEuSQ/a67FnXXH09oIUJFxrbp8kuAgMvQWgc7QspFn2T89AYBcODIS8mZsYpRJWvFdU7a2aK3ZWjKF6EnvqMadv97J12OUBAjSF24sX64fuZphBftD3SHepKIOjM7mnrIMYEs50GJrnnBZ7nCb62+2dqIMMDdyb6642KFMWGGzKQqtQw8=) 2025-05-25 00:28:23.305797 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB/gC7dZigA0eMBxjFebT4XNJESk7S8ftEJRk9+v9l5ziPt6kpg8HWSXnb6kekzXKd0FuQtZsx7FIRfDn81uuIk=) 2025-05-25 00:28:23.306297 | orchestrator | 2025-05-25 00:28:23.307521 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-25 00:28:23.308384 | orchestrator | Sunday 25 May 2025 00:28:23 +0000 (0:00:01.068) 0:00:26.547 ************ 2025-05-25 00:28:23.485720 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-25 00:28:23.486520 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-3)  2025-05-25 00:28:23.486757 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-25 00:28:23.488675 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-25 00:28:23.489026 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-25 00:28:23.489419 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-25 00:28:23.490116 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-25 00:28:23.490559 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:28:23.491043 | orchestrator | 2025-05-25 00:28:23.491834 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-25 00:28:23.492074 | orchestrator | Sunday 25 May 2025 00:28:23 +0000 (0:00:00.182) 0:00:26.730 ************ 2025-05-25 00:28:23.543127 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:28:23.543234 | orchestrator | 2025-05-25 00:28:23.544593 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-25 00:28:23.545024 | orchestrator | Sunday 25 May 2025 00:28:23 +0000 (0:00:00.058) 0:00:26.788 ************ 2025-05-25 00:28:23.594428 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:28:23.596983 | orchestrator | 2025-05-25 00:28:23.597375 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-25 00:28:23.598377 | orchestrator | Sunday 25 May 2025 00:28:23 +0000 (0:00:00.052) 0:00:26.840 ************ 2025-05-25 00:28:24.344035 | orchestrator | changed: [testbed-manager] 2025-05-25 00:28:24.344507 | orchestrator | 2025-05-25 00:28:24.345512 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:28:24.346001 | orchestrator | 2025-05-25 00:28:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-25 00:28:24.346417 | orchestrator | 2025-05-25 00:28:24 | INFO  | Please wait and do not abort execution. 2025-05-25 00:28:24.347608 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:28:24.348780 | orchestrator | 2025-05-25 00:28:24.349882 | orchestrator | Sunday 25 May 2025 00:28:24 +0000 (0:00:00.746) 0:00:27.587 ************ 2025-05-25 00:28:24.350109 | orchestrator | =============================================================================== 2025-05-25 00:28:24.351092 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.00s 2025-05-25 00:28:24.351667 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.30s 2025-05-25 00:28:24.352838 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-25 00:28:24.354235 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-25 00:28:24.354310 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-25 00:28:24.354810 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-25 00:28:24.355451 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-25 00:28:24.355901 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-25 00:28:24.356343 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-25 00:28:24.356688 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-25 00:28:24.357528 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-25 00:28:24.357895 | orchestrator | osism.commons.known_hosts : Write scanned 
known_hosts entries ----------- 1.03s 2025-05-25 00:28:24.358739 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-25 00:28:24.359087 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-25 00:28:24.359676 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-25 00:28:24.360259 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-25 00:28:24.361173 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.75s 2025-05-25 00:28:24.361441 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-05-25 00:28:24.361957 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-05-25 00:28:24.362461 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-05-25 00:28:24.711036 | orchestrator | + osism apply squid 2025-05-25 00:28:26.146675 | orchestrator | 2025-05-25 00:28:26 | INFO  | Task f5498a97-5783-4839-ae32-bd98ab857162 (squid) was prepared for execution. 2025-05-25 00:28:26.146763 | orchestrator | 2025-05-25 00:28:26 | INFO  | It takes a moment until task f5498a97-5783-4839-ae32-bd98ab857162 (squid) has been started and output is visible here. 
2025-05-25 00:28:29.140821 | orchestrator | 2025-05-25 00:28:29.142122 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-25 00:28:29.142156 | orchestrator | 2025-05-25 00:28:29.142451 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-25 00:28:29.143368 | orchestrator | Sunday 25 May 2025 00:28:29 +0000 (0:00:00.105) 0:00:00.105 ************ 2025-05-25 00:28:29.231156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-25 00:28:29.231486 | orchestrator | 2025-05-25 00:28:29.233051 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-25 00:28:29.233314 | orchestrator | Sunday 25 May 2025 00:28:29 +0000 (0:00:00.093) 0:00:00.198 ************ 2025-05-25 00:28:30.653154 | orchestrator | ok: [testbed-manager] 2025-05-25 00:28:30.653286 | orchestrator | 2025-05-25 00:28:30.653304 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-25 00:28:30.654077 | orchestrator | Sunday 25 May 2025 00:28:30 +0000 (0:00:01.419) 0:00:01.617 ************ 2025-05-25 00:28:31.786826 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-25 00:28:31.787940 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-25 00:28:31.787969 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-25 00:28:31.788532 | orchestrator | 2025-05-25 00:28:31.789175 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-25 00:28:31.789894 | orchestrator | Sunday 25 May 2025 00:28:31 +0000 (0:00:01.135) 0:00:02.753 ************ 2025-05-25 00:28:32.860708 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-25 00:28:32.861596 | 
orchestrator | 2025-05-25 00:28:32.862216 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-25 00:28:32.862932 | orchestrator | Sunday 25 May 2025 00:28:32 +0000 (0:00:01.073) 0:00:03.827 ************ 2025-05-25 00:28:33.211122 | orchestrator | ok: [testbed-manager] 2025-05-25 00:28:33.211344 | orchestrator | 2025-05-25 00:28:33.211624 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-25 00:28:33.212085 | orchestrator | Sunday 25 May 2025 00:28:33 +0000 (0:00:00.352) 0:00:04.179 ************ 2025-05-25 00:28:34.241973 | orchestrator | changed: [testbed-manager] 2025-05-25 00:28:34.242134 | orchestrator | 2025-05-25 00:28:34.242150 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-25 00:28:34.242451 | orchestrator | Sunday 25 May 2025 00:28:34 +0000 (0:00:01.029) 0:00:05.208 ************ 2025-05-25 00:29:05.916608 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-05-25 00:29:05.916730 | orchestrator | ok: [testbed-manager] 2025-05-25 00:29:05.916746 | orchestrator | 2025-05-25 00:29:05.916758 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-25 00:29:05.916771 | orchestrator | Sunday 25 May 2025 00:29:05 +0000 (0:00:31.667) 0:00:36.876 ************ 2025-05-25 00:29:18.410425 | orchestrator | changed: [testbed-manager] 2025-05-25 00:29:18.410547 | orchestrator | 2025-05-25 00:29:18.410564 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-25 00:29:18.410578 | orchestrator | Sunday 25 May 2025 00:29:18 +0000 (0:00:12.495) 0:00:49.371 ************ 2025-05-25 00:30:18.496799 | orchestrator | Pausing for 60 seconds 2025-05-25 00:30:18.496918 | orchestrator | changed: [testbed-manager] 2025-05-25 00:30:18.496935 | orchestrator | 2025-05-25 00:30:18.496948 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-25 00:30:18.496960 | orchestrator | Sunday 25 May 2025 00:30:18 +0000 (0:01:00.085) 0:01:49.457 ************ 2025-05-25 00:30:18.567459 | orchestrator | ok: [testbed-manager] 2025-05-25 00:30:18.567524 | orchestrator | 2025-05-25 00:30:18.568181 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-25 00:30:18.568691 | orchestrator | Sunday 25 May 2025 00:30:18 +0000 (0:00:00.076) 0:01:49.534 ************ 2025-05-25 00:30:19.166927 | orchestrator | changed: [testbed-manager] 2025-05-25 00:30:19.167086 | orchestrator | 2025-05-25 00:30:19.167879 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:30:19.168545 | orchestrator | 2025-05-25 00:30:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-25 00:30:19.168569 | orchestrator | 2025-05-25 00:30:19 | INFO  | Please wait and do not abort execution. 2025-05-25 00:30:19.169413 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 00:30:19.170136 | orchestrator | 2025-05-25 00:30:19.170852 | orchestrator | Sunday 25 May 2025 00:30:19 +0000 (0:00:00.599) 0:01:50.133 ************ 2025-05-25 00:30:19.171703 | orchestrator | =============================================================================== 2025-05-25 00:30:19.172417 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-05-25 00:30:19.173609 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.67s 2025-05-25 00:30:19.173921 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.50s 2025-05-25 00:30:19.174710 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.42s 2025-05-25 00:30:19.175018 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.14s 2025-05-25 00:30:19.175697 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2025-05-25 00:30:19.175919 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.03s 2025-05-25 00:30:19.176484 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-05-25 00:30:19.176919 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-05-25 00:30:19.177983 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-05-25 00:30:19.178003 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-05-25 00:30:19.593709 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-25 00:30:19.593802 | 
orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-25 00:30:19.599914 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-25 00:30:19.647290 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-25 00:30:19.647359 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-25 00:30:19.647374 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-25 00:30:19.651749 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-25 00:30:19.656734 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-25 00:30:19.660753 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-25 00:30:21.061118 | orchestrator | 2025-05-25 00:30:21 | INFO  | Task 123f4606-1bf5-4e94-98a2-5bc5dd13294d (operator) was prepared for execution. 2025-05-25 00:30:21.061222 | orchestrator | 2025-05-25 00:30:21 | INFO  | It takes a moment until task 123f4606-1bf5-4e94-98a2-5bc5dd13294d (operator) has been started and output is visible here. 
2025-05-25 00:30:24.057390 | orchestrator | 2025-05-25 00:30:24.061125 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-25 00:30:24.061245 | orchestrator | 2025-05-25 00:30:24.061535 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-25 00:30:24.062427 | orchestrator | Sunday 25 May 2025 00:30:24 +0000 (0:00:00.087) 0:00:00.087 ************ 2025-05-25 00:30:27.406078 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:30:27.406467 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:30:27.406498 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:30:27.406544 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:30:27.406795 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:30:27.407116 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:30:27.410184 | orchestrator | 2025-05-25 00:30:27.410210 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-25 00:30:27.410221 | orchestrator | Sunday 25 May 2025 00:30:27 +0000 (0:00:03.351) 0:00:03.438 ************ 2025-05-25 00:30:28.178686 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:30:28.179134 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:30:28.182179 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:30:28.182460 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:30:28.182994 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:30:28.183549 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:30:28.183984 | orchestrator | 2025-05-25 00:30:28.184355 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-25 00:30:28.184829 | orchestrator | 2025-05-25 00:30:28.189158 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-25 00:30:28.189506 | orchestrator | Sunday 25 May 2025 00:30:28 +0000 (0:00:00.772) 0:00:04.211 ************ 2025-05-25 
00:30:28.249520 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:30:28.294590 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:30:28.313917 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:30:28.358449 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:30:28.358686 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:30:28.362191 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:30:28.362500 | orchestrator | 2025-05-25 00:30:28.363150 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-25 00:30:28.363542 | orchestrator | Sunday 25 May 2025 00:30:28 +0000 (0:00:00.179) 0:00:04.391 ************ 2025-05-25 00:30:28.418758 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:30:28.444357 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:30:28.457818 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:30:28.500779 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:30:28.501149 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:30:28.501526 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:30:28.502170 | orchestrator | 2025-05-25 00:30:28.502500 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-25 00:30:28.502902 | orchestrator | Sunday 25 May 2025 00:30:28 +0000 (0:00:00.142) 0:00:04.533 ************ 2025-05-25 00:30:29.101888 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:30:29.103436 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:30:29.106901 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:30:29.106956 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:30:29.107492 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:30:29.108742 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:30:29.109523 | orchestrator | 2025-05-25 00:30:29.110332 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-25 00:30:29.110743 | orchestrator | Sunday 25 May 2025 
00:30:29 +0000 (0:00:00.599) 0:00:05.133 ************ 2025-05-25 00:30:29.889355 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:30:29.891710 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:30:29.891805 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:30:29.891819 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:30:29.891831 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:30:29.892104 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:30:29.893156 | orchestrator | 2025-05-25 00:30:29.894213 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-25 00:30:29.895143 | orchestrator | Sunday 25 May 2025 00:30:29 +0000 (0:00:00.786) 0:00:05.919 ************ 2025-05-25 00:30:31.033510 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-25 00:30:31.034377 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-25 00:30:31.038418 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-25 00:30:31.038452 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-25 00:30:31.038465 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-25 00:30:31.038477 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-25 00:30:31.038890 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-25 00:30:31.041913 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-25 00:30:31.042093 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-25 00:30:31.044680 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-25 00:30:31.045109 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-25 00:30:31.045830 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-25 00:30:31.046092 | orchestrator | 2025-05-25 00:30:31.046724 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-25 00:30:31.048045 | orchestrator | Sunday 25 
May 2025 00:30:31 +0000 (0:00:01.142) 0:00:07.062 ************ 2025-05-25 00:30:32.265118 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:30:32.265220 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:30:32.265372 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:30:32.265809 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:30:32.266676 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:30:32.268135 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:30:32.268499 | orchestrator | 2025-05-25 00:30:32.269008 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-25 00:30:32.269469 | orchestrator | Sunday 25 May 2025 00:30:32 +0000 (0:00:01.230) 0:00:08.292 ************ 2025-05-25 00:30:33.444039 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-25 00:30:33.445228 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-25 00:30:33.446068 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-25 00:30:33.533829 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-25 00:30:33.533923 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-25 00:30:33.533938 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-25 00:30:33.533950 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-25 00:30:33.534728 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-25 00:30:33.535413 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-25 00:30:33.536591 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-25 00:30:33.538295 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-25 00:30:33.538704 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-25 00:30:33.539847 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-25 00:30:33.540973 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-25 00:30:33.542687 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-25 00:30:33.543505 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-25 00:30:33.545066 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-25 00:30:33.545432 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-25 00:30:33.546397 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-25 00:30:33.547322 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-25 00:30:33.548728 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-25 00:30:33.549164 | 
orchestrator |
2025-05-25 00:30:33.550135 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-25 00:30:33.550841 | orchestrator | Sunday 25 May 2025 00:30:33 +0000 (0:00:01.267) 0:00:09.560 ************
2025-05-25 00:30:34.162875 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:30:34.163112 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:30:34.164148 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:30:34.165354 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:30:34.166586 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:30:34.167988 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:30:34.171663 | orchestrator |
2025-05-25 00:30:34.172689 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-25 00:30:34.173051 | orchestrator | Sunday 25 May 2025 00:30:34 +0000 (0:00:00.633) 0:00:10.194 ************
2025-05-25 00:30:34.256558 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:30:34.279804 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:30:34.324041 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:30:34.325398 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:30:34.325434 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:30:34.326205 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:30:34.329588 | orchestrator |
2025-05-25 00:30:34.330095 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-25 00:30:34.330445 | orchestrator | Sunday 25 May 2025 00:30:34 +0000 (0:00:00.162) 0:00:10.356 ************
2025-05-25 00:30:35.151714 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-25 00:30:35.153006 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-25 00:30:35.155613 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:30:35.156037 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-25 00:30:35.156371 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:30:35.156795 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:30:35.157088 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-25 00:30:35.157875 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:30:35.158237 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-25 00:30:35.158601 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:30:35.158903 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-25 00:30:35.159344 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:30:35.159897 | orchestrator |
2025-05-25 00:30:35.161841 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-25 00:30:35.163026 | orchestrator | Sunday 25 May 2025 00:30:35 +0000 (0:00:00.826) 0:00:11.182 ************
2025-05-25 00:30:35.195310 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:30:35.218464 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:30:35.262840 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:30:35.302473 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:30:35.303378 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:30:35.307117 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:30:35.307990 | orchestrator |
2025-05-25 00:30:35.309010 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-25 00:30:35.311774 | orchestrator | Sunday 25 May 2025 00:30:35 +0000 (0:00:00.152) 0:00:11.335 ************
2025-05-25 00:30:35.349797 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:30:35.368787 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:30:35.389490 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:30:35.410241 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:30:35.439424 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:30:35.440007 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:30:35.440846 | orchestrator |
2025-05-25 00:30:35.441192 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-25 00:30:35.444679 | orchestrator | Sunday 25 May 2025 00:30:35 +0000 (0:00:00.137) 0:00:11.473 ************
2025-05-25 00:30:35.514365 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:30:35.534914 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:30:35.553599 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:30:35.581414 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:30:35.582182 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:30:35.585626 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:30:35.587639 | orchestrator |
2025-05-25 00:30:35.588413 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-25 00:30:35.588970 | orchestrator | Sunday 25 May 2025 00:30:35 +0000 (0:00:00.141) 0:00:11.614 ************
2025-05-25 00:30:36.258410 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:30:36.261324 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:30:36.261359 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:30:36.261372 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:30:36.261384 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:30:36.262158 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:30:36.265474 | orchestrator |
2025-05-25 00:30:36.267073 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-25 00:30:36.267630 | orchestrator | Sunday 25 May 2025 00:30:36 +0000 (0:00:00.674) 0:00:12.289 ************
2025-05-25 00:30:36.350881 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:30:36.363679 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:30:36.464078 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:30:36.464710 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:30:36.465208 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:30:36.466136 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:30:36.467343 | orchestrator |
2025-05-25 00:30:36.467974 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:30:36.468965 | orchestrator | 2025-05-25 00:30:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:30:36.468990 | orchestrator | 2025-05-25 00:30:36 | INFO  | Please wait and do not abort execution.
2025-05-25 00:30:36.469621 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 00:30:36.470638 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 00:30:36.471407 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 00:30:36.472462 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 00:30:36.473315 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 00:30:36.474307 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-25 00:30:36.475000 | orchestrator |
2025-05-25 00:30:36.475794 | orchestrator | Sunday 25 May 2025 00:30:36 +0000 (0:00:00.207) 0:00:12.497 ************
2025-05-25 00:30:36.476611 | orchestrator | ===============================================================================
2025-05-25 00:30:36.477156 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2025-05-25 00:30:36.477710 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s
2025-05-25 00:30:36.478154 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.23s
2025-05-25 00:30:36.478762 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s
2025-05-25 00:30:36.479341 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.83s
2025-05-25 00:30:36.480117 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2025-05-25 00:30:36.480699 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2025-05-25 00:30:36.481569 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2025-05-25 00:30:36.481949 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s
2025-05-25 00:30:36.482737 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-05-25 00:30:36.483190 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-05-25 00:30:36.483854 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-05-25 00:30:36.484618 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-05-25 00:30:36.489076 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2025-05-25 00:30:36.489366 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2025-05-25 00:30:36.491054 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-05-25 00:30:36.491852 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-05-25 00:30:36.927858 | orchestrator | + osism apply --environment custom facts
2025-05-25 00:30:38.271642 | orchestrator | 2025-05-25 00:30:38 | INFO  | Trying to run play facts in environment custom
2025-05-25 00:30:38.318430 | orchestrator | 2025-05-25 00:30:38 | INFO  | Task 670c01aa-bd72-4552-b2c4-3956e1b875ac (facts) was prepared for execution.
2025-05-25 00:30:38.318513 | orchestrator | 2025-05-25 00:30:38 | INFO  | It takes a moment until task 670c01aa-bd72-4552-b2c4-3956e1b875ac (facts) has been started and output is visible here.
2025-05-25 00:30:40.543664 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2025-05-25 00:30:40.543755 | orchestrator | -vvvv to see details
2025-05-25 00:30:41.014826 | orchestrator |
2025-05-25 00:30:41.015201 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-25 00:30:41.020217 | orchestrator |
2025-05-25 00:30:41.022074 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-25 00:30:41.639858 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:30:41.640051 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:30:41.640109 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:30:41.640188 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:30:41.643302 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:30:41.643414 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:30:41.644258 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:30:41.645114 | orchestrator |
2025-05-25 00:30:41.645139 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:30:41.645171 | orchestrator | 2025-05-25 00:30:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:30:41.645210 | orchestrator | 2025-05-25 00:30:41 | INFO  | Please wait and do not abort execution.
2025-05-25 00:30:41.645569 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:30:41.645863 | orchestrator | testbed-node-0 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:30:41.646549 | orchestrator | testbed-node-1 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:30:41.646854 | orchestrator | testbed-node-2 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:30:41.647225 | orchestrator | testbed-node-3 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:30:41.647620 | orchestrator | testbed-node-4 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:30:41.647914 | orchestrator | testbed-node-5 : ok=0 changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:30:41.648346 | orchestrator |
2025-05-25 00:30:41.819704 | orchestrator | 2025-05-25 00:30:41 | INFO  | Trying to run play facts in environment custom
2025-05-25 00:30:41.821522 | orchestrator | 2025-05-25 00:30:41 | INFO  | Task 011a80dc-5079-4a96-9ba7-62a1a57e377a (facts) was prepared for execution.
2025-05-25 00:30:41.822114 | orchestrator | 2025-05-25 00:30:41 | INFO  | It takes a moment until task 011a80dc-5079-4a96-9ba7-62a1a57e377a (facts) has been started and output is visible here.
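The first `osism apply --environment custom facts` run above fails on every host because the deployment key `/ansible/secrets/id_rsa` does not exist yet inside the Ansible environment; the job then simply runs the same play a second time, which succeeds. A minimal pre-flight check along these lines could surface the race earlier (the key path, `dragon` user, and `192.168.16.x` addresses are taken from the log; the helper functions themselves are illustrative, not part of the job):

```shell
# key_ready: succeed once the deployment key exists on disk and the host
# answers over SSH with that key (BatchMode avoids interactive prompts).
key_ready() {
    key=$1 user=$2 host=$3
    [ -f "$key" ] && ssh -i "$key" -o BatchMode=yes -o ConnectTimeout=5 "$user@$host" true
}

# wait_for_key: poll a few times before giving up, which is roughly what the
# job achieves implicitly by re-running the facts play after the first
# UNREACHABLE result. Fourth argument (attempt count) defaults to 3.
wait_for_key() {
    attempts=${4:-3}
    while [ "$attempts" -gt 0 ]; do
        key_ready "$1" "$2" "$3" && return 0
        attempts=$((attempts - 1))
        sleep 1
    done
    return 1
}
```

For example, `wait_for_key /ansible/secrets/id_rsa dragon 192.168.16.10` would block until testbed-node-0 is reachable with the key, instead of letting the first play run fail wholesale.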
2025-05-25 00:30:44.745647 | orchestrator |
2025-05-25 00:30:44.751370 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-25 00:30:44.751411 | orchestrator |
2025-05-25 00:30:44.752257 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-25 00:30:44.752300 | orchestrator | Sunday 25 May 2025 00:30:44 +0000 (0:00:00.081) 0:00:00.081 ************
2025-05-25 00:30:46.022523 | orchestrator | ok: [testbed-manager]
2025-05-25 00:30:47.076780 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:30:47.076901 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:30:47.076927 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:30:47.077262 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:30:47.077415 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:30:47.078294 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:30:47.079201 | orchestrator |
2025-05-25 00:30:47.079828 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-25 00:30:47.080116 | orchestrator | Sunday 25 May 2025 00:30:47 +0000 (0:00:02.329) 0:00:02.410 ************
2025-05-25 00:30:48.159231 | orchestrator | ok: [testbed-manager]
2025-05-25 00:30:49.081697 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:30:49.083107 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:30:49.083142 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:30:49.084754 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:30:49.085009 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:30:49.088163 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:30:49.090005 | orchestrator |
2025-05-25 00:30:49.090935 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-25 00:30:49.092025 | orchestrator |
2025-05-25 00:30:49.092996 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-25 00:30:49.093924 | orchestrator | Sunday 25 May 2025 00:30:49 +0000 (0:00:02.006) 0:00:04.416 ************
2025-05-25 00:30:49.190617 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:30:49.191078 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:30:49.191763 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:30:49.192670 | orchestrator |
2025-05-25 00:30:49.195096 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-25 00:30:49.195125 | orchestrator | Sunday 25 May 2025 00:30:49 +0000 (0:00:00.113) 0:00:04.530 ************
2025-05-25 00:30:49.323530 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:30:49.323724 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:30:49.324715 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:30:49.329082 | orchestrator |
2025-05-25 00:30:49.329903 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-25 00:30:49.330918 | orchestrator | Sunday 25 May 2025 00:30:49 +0000 (0:00:00.115) 0:00:04.662 ************
2025-05-25 00:30:49.439321 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:30:49.439534 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:30:49.440043 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:30:49.440507 | orchestrator |
2025-05-25 00:30:49.443340 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-25 00:30:49.443565 | orchestrator | Sunday 25 May 2025 00:30:49 +0000 (0:00:00.123) 0:00:04.778 ************
2025-05-25 00:30:49.563247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:30:49.564131 | orchestrator |
2025-05-25 00:30:49.565068 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-25 00:30:49.568946 | orchestrator | Sunday 25 May 2025 00:30:49 +0000 (0:00:00.123) 0:00:04.901 ************
2025-05-25 00:30:49.983574 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:30:49.985327 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:30:49.985358 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:30:49.985795 | orchestrator |
2025-05-25 00:30:49.989157 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-25 00:30:49.989195 | orchestrator | Sunday 25 May 2025 00:30:49 +0000 (0:00:00.413) 0:00:05.315 ************
2025-05-25 00:30:50.073831 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:30:50.074006 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:30:50.074432 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:30:50.074795 | orchestrator |
2025-05-25 00:30:50.075228 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-25 00:30:50.075684 | orchestrator | Sunday 25 May 2025 00:30:50 +0000 (0:00:00.098) 0:00:05.413 ************
2025-05-25 00:30:50.989851 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:30:50.990774 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:30:50.990859 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:30:50.991777 | orchestrator |
2025-05-25 00:30:50.993522 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-25 00:30:50.994150 | orchestrator | Sunday 25 May 2025 00:30:50 +0000 (0:00:00.912) 0:00:06.326 ************
2025-05-25 00:30:51.453428 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:30:51.454844 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:30:51.455813 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:30:51.457648 | orchestrator |
2025-05-25 00:30:51.458371 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-25 00:30:51.459182 | orchestrator | Sunday 25 May 2025 00:30:51 +0000 (0:00:00.464) 0:00:06.791 ************
2025-05-25 00:30:52.471145 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:30:52.471251 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:30:52.471393 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:30:52.471643 | orchestrator |
2025-05-25 00:30:52.472002 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-25 00:30:52.474355 | orchestrator | Sunday 25 May 2025 00:30:52 +0000 (0:00:01.016) 0:00:07.807 ************
2025-05-25 00:31:05.623540 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:31:05.625152 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:31:05.625771 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:31:05.626511 | orchestrator |
2025-05-25 00:31:05.627448 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-25 00:31:05.629720 | orchestrator | Sunday 25 May 2025 00:31:05 +0000 (0:00:13.150) 0:00:20.958 ************
2025-05-25 00:31:05.678650 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:31:05.711747 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:31:05.712278 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:31:05.716716 | orchestrator |
2025-05-25 00:31:05.716899 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-25 00:31:05.717769 | orchestrator | Sunday 25 May 2025 00:31:05 +0000 (0:00:00.093) 0:00:21.051 ************
2025-05-25 00:31:12.958965 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:31:12.959189 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:31:12.959672 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:31:12.960711 | orchestrator |
2025-05-25 00:31:12.962609 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-25 00:31:12.963423 | orchestrator | Sunday 25 May 2025 00:31:12 +0000 (0:00:07.244) 0:00:28.295 ************
2025-05-25 00:31:13.404117 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:31:13.404335 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:31:13.405124 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:31:13.405784 | orchestrator |
2025-05-25 00:31:13.406674 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-25 00:31:13.407068 | orchestrator | Sunday 25 May 2025 00:31:13 +0000 (0:00:00.447) 0:00:28.742 ************
2025-05-25 00:31:16.865993 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-25 00:31:16.866905 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-25 00:31:16.867631 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-25 00:31:16.868562 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-25 00:31:16.869719 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-25 00:31:16.870083 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-25 00:31:16.870829 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-25 00:31:16.871728 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-25 00:31:16.872439 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-25 00:31:16.873049 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-25 00:31:16.873641 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-25 00:31:16.874411 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-25 00:31:16.875068 | orchestrator |
2025-05-25 00:31:16.875364 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-25 00:31:16.876002 | orchestrator | Sunday 25 May 2025 00:31:16 +0000 (0:00:03.454) 0:00:32.197 ************
2025-05-25 00:31:17.960376 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:31:17.961098 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:31:17.961751 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:31:17.962637 | orchestrator |
2025-05-25 00:31:17.963449 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 00:31:17.964127 | orchestrator |
2025-05-25 00:31:17.965108 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 00:31:17.965731 | orchestrator | Sunday 25 May 2025 00:31:17 +0000 (0:00:01.099) 0:00:33.297 ************
2025-05-25 00:31:19.684787 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:31:22.893782 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:31:22.893901 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:31:22.893987 | orchestrator | ok: [testbed-manager]
2025-05-25 00:31:22.894365 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:31:22.894958 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:31:22.895430 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:31:22.896012 | orchestrator |
2025-05-25 00:31:22.896906 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:31:22.896972 | orchestrator | 2025-05-25 00:31:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:31:22.897037 | orchestrator | 2025-05-25 00:31:22 | INFO  | Please wait and do not abort execution.
2025-05-25 00:31:22.897272 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:31:22.897806 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:31:22.898150 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:31:22.898601 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:31:22.899499 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:31:22.899873 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:31:22.900346 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:31:22.900660 | orchestrator |
2025-05-25 00:31:22.901321 | orchestrator | Sunday 25 May 2025 00:31:22 +0000 (0:00:04.932) 0:00:38.229 ************
2025-05-25 00:31:22.901347 | orchestrator | ===============================================================================
2025-05-25 00:31:22.901620 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.15s
2025-05-25 00:31:22.901983 | orchestrator | Install required packages (Debian) -------------------------------------- 7.24s
2025-05-25 00:31:22.902226 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.93s
2025-05-25 00:31:22.902996 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s
2025-05-25 00:31:22.903405 | orchestrator | Create custom facts directory ------------------------------------------- 2.33s
2025-05-25 00:31:22.903432 | orchestrator | Copy fact file ---------------------------------------------------------- 2.01s
2025-05-25 00:31:22.903704 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.10s
2025-05-25 00:31:22.904358 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2025-05-25 00:31:22.904379 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.91s
2025-05-25 00:31:22.904804 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-05-25 00:31:22.905658 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-05-25 00:31:22.906110 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2025-05-25 00:31:22.906599 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.13s
2025-05-25 00:31:22.906942 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2025-05-25 00:31:22.907642 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.12s
2025-05-25 00:31:22.907956 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-05-25 00:31:22.908896 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-05-25 00:31:22.909090 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2025-05-25 00:31:23.351378 | orchestrator | + osism apply bootstrap
2025-05-25 00:31:24.734127 | orchestrator | 2025-05-25 00:31:24 | INFO  | Task b89fa14b-c04a-4513-b89c-fc7e85e40b4e (bootstrap) was prepared for execution.
2025-05-25 00:31:24.734237 | orchestrator | 2025-05-25 00:31:24 | INFO  | It takes a moment until task b89fa14b-c04a-4513-b89c-fc7e85e40b4e (bootstrap) has been started and output is visible here.
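In the `osism.commons.repository` run above, the "Include tasks for Ubuntu < 24.04" step is skipped and the role instead removes the legacy `/etc/apt/sources.list` and installs an `/etc/apt/sources.list.d/ubuntu.sources` file, the deb822 format that Ubuntu 24.04 uses by default. The role's actual template is not shown in the log; a typical stanza of this kind (mirror URI and keyring path are illustrative assumptions) looks like:

```
# /etc/apt/sources.list.d/ubuntu.sources -- deb822 format; not the role's
# exact template, only an example of the file type it writes.
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

Replacing `sources.list` with a deb822 file is why the subsequent "Update package cache" task reports `changed` on all three nodes.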
2025-05-25 00:31:27.887966 | orchestrator |
2025-05-25 00:31:27.888162 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-25 00:31:27.890430 | orchestrator |
2025-05-25 00:31:27.891394 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-25 00:31:27.893187 | orchestrator | Sunday 25 May 2025 00:31:27 +0000 (0:00:00.106) 0:00:00.106 ************
2025-05-25 00:31:27.980477 | orchestrator | ok: [testbed-manager]
2025-05-25 00:31:28.003830 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:31:28.030543 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:31:28.138445 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:31:28.138545 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:31:28.139489 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:31:28.139722 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:31:28.140389 | orchestrator |
2025-05-25 00:31:28.140851 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 00:31:28.141481 | orchestrator |
2025-05-25 00:31:28.142421 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 00:31:28.142517 | orchestrator | Sunday 25 May 2025 00:31:28 +0000 (0:00:00.254) 0:00:00.360 ************
2025-05-25 00:31:31.742331 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:31:31.742888 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:31:31.743626 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:31:31.744495 | orchestrator | ok: [testbed-manager]
2025-05-25 00:31:31.745208 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:31:31.745851 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:31:31.746407 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:31:31.746897 | orchestrator |
2025-05-25 00:31:31.747627 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-25 00:31:31.748225 | orchestrator |
2025-05-25 00:31:31.748712 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 00:31:31.749210 | orchestrator | Sunday 25 May 2025 00:31:31 +0000 (0:00:03.603) 0:00:03.964 ************
2025-05-25 00:31:31.844887 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-25 00:31:31.845005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-25 00:31:31.845021 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-25 00:31:31.845131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:31:31.845149 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-25 00:31:31.889710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:31:31.891154 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-25 00:31:31.891187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-25 00:31:31.891199 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-25 00:31:31.891210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:31:31.943409 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-25 00:31:31.943493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 00:31:31.943644 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-25 00:31:31.943717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-25 00:31:31.943990 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-25 00:31:31.944245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 00:31:31.944574 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-25 00:31:31.944873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-25 00:31:31.945203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-25 00:31:32.189598 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:31:32.190403 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-25 00:31:32.193397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 00:31:32.193468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 00:31:32.193481 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:31:32.193493 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:31:32.193504 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-25 00:31:32.193515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 00:31:32.194634 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-25 00:31:32.195424 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 00:31:32.195533 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:31:32.196393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:31:32.197206 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-25 00:31:32.197720 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-25 00:31:32.198485 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:31:32.198509 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-25 00:31:32.199139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 00:31:32.199665 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-25 00:31:32.200048 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:31:32.200332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-25 00:31:32.200949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 00:31:32.201425 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-25 00:31:32.202792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:31:32.203520 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 00:31:32.204066 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 00:31:32.204599 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-25 00:31:32.204892 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:31:32.205285 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 00:31:32.205592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:31:32.205973 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-25 00:31:32.206193 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:31:32.206884 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 00:31:32.207239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 00:31:32.207545 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:31:32.209109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 00:31:32.209135 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 00:31:32.209146 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:31:32.209158 | orchestrator |
2025-05-25 00:31:32.209617 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-25 00:31:32.209796 | orchestrator |
2025-05-25 00:31:32.210214 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-05-25 00:31:32.210593 | orchestrator | Sunday 25 May 2025 00:31:32 +0000 (0:00:00.446)
0:00:04.411 ************ 2025-05-25 00:31:32.264837 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:32.289006 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:32.307483 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:32.330727 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:32.397417 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:32.397598 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:32.399091 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:32.402407 | orchestrator | 2025-05-25 00:31:32.402496 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-25 00:31:32.402513 | orchestrator | Sunday 25 May 2025 00:31:32 +0000 (0:00:00.207) 0:00:04.618 ************ 2025-05-25 00:31:33.615646 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:33.616469 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:33.616503 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:33.616650 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:33.617494 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:33.617900 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:33.618361 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:33.618808 | orchestrator | 2025-05-25 00:31:33.619792 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-25 00:31:33.619816 | orchestrator | Sunday 25 May 2025 00:31:33 +0000 (0:00:01.218) 0:00:05.836 ************ 2025-05-25 00:31:34.851934 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:34.855201 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:34.855720 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:34.856708 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:34.857737 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:34.858862 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:34.860181 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:34.862515 | 
orchestrator | 2025-05-25 00:31:34.862950 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-25 00:31:34.864895 | orchestrator | Sunday 25 May 2025 00:31:34 +0000 (0:00:01.235) 0:00:07.072 ************ 2025-05-25 00:31:35.165245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:31:35.165379 | orchestrator | 2025-05-25 00:31:35.165439 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-25 00:31:35.165649 | orchestrator | Sunday 25 May 2025 00:31:35 +0000 (0:00:00.315) 0:00:07.387 ************ 2025-05-25 00:31:37.075284 | orchestrator | changed: [testbed-manager] 2025-05-25 00:31:37.075430 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:31:37.075913 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:31:37.079101 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:37.079127 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:31:37.079138 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:37.080709 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:31:37.080998 | orchestrator | 2025-05-25 00:31:37.081919 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-25 00:31:37.082716 | orchestrator | Sunday 25 May 2025 00:31:37 +0000 (0:00:01.907) 0:00:09.294 ************ 2025-05-25 00:31:37.162630 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:31:37.349402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:31:37.349747 | orchestrator | 2025-05-25 00:31:37.350945 | 
orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-25 00:31:37.351494 | orchestrator | Sunday 25 May 2025 00:31:37 +0000 (0:00:00.275) 0:00:09.570 ************ 2025-05-25 00:31:38.334500 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:31:38.334831 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:31:38.336274 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:31:38.336397 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:38.337084 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:38.338102 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:31:38.338612 | orchestrator | 2025-05-25 00:31:38.339592 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-25 00:31:38.340141 | orchestrator | Sunday 25 May 2025 00:31:38 +0000 (0:00:00.984) 0:00:10.555 ************ 2025-05-25 00:31:38.402005 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:31:38.926356 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:31:38.927593 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:31:38.927862 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:38.928608 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:31:38.929469 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:38.930148 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:31:38.931260 | orchestrator | 2025-05-25 00:31:38.931716 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-25 00:31:38.932058 | orchestrator | Sunday 25 May 2025 00:31:38 +0000 (0:00:00.592) 0:00:11.148 ************ 2025-05-25 00:31:39.029030 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:31:39.046181 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:31:39.069743 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:31:39.368431 | orchestrator | skipping: [testbed-node-0] 2025-05-25 
00:31:39.371481 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:31:39.371530 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:31:39.371543 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:39.373367 | orchestrator | 2025-05-25 00:31:39.374138 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-25 00:31:39.375413 | orchestrator | Sunday 25 May 2025 00:31:39 +0000 (0:00:00.439) 0:00:11.587 ************ 2025-05-25 00:31:39.435506 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:31:39.457959 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:31:39.482645 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:31:39.506120 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:31:39.566556 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:31:39.567576 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:31:39.568240 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:31:39.568999 | orchestrator | 2025-05-25 00:31:39.569408 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-25 00:31:39.570188 | orchestrator | Sunday 25 May 2025 00:31:39 +0000 (0:00:00.199) 0:00:11.787 ************ 2025-05-25 00:31:39.827257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:31:39.831191 | orchestrator | 2025-05-25 00:31:39.832541 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-25 00:31:39.832568 | orchestrator | Sunday 25 May 2025 00:31:39 +0000 (0:00:00.260) 0:00:12.047 ************ 2025-05-25 00:31:40.122518 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:31:40.122727 | orchestrator | 2025-05-25 00:31:40.123609 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-25 00:31:40.124099 | orchestrator | Sunday 25 May 2025 00:31:40 +0000 (0:00:00.296) 0:00:12.344 ************ 2025-05-25 00:31:41.427699 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:41.427929 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:41.428942 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:41.429667 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:41.430789 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:41.430992 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:41.432179 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:41.432691 | orchestrator | 2025-05-25 00:31:41.433215 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-25 00:31:41.433899 | orchestrator | Sunday 25 May 2025 00:31:41 +0000 (0:00:01.302) 0:00:13.646 ************ 2025-05-25 00:31:41.516366 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:31:41.544373 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:31:41.568414 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:31:41.592866 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:31:41.643574 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:31:41.644624 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:31:41.645392 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:31:41.645694 | orchestrator | 2025-05-25 00:31:41.646105 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-25 00:31:41.646354 | orchestrator | Sunday 25 May 2025 00:31:41 
+0000 (0:00:00.219) 0:00:13.866 ************ 2025-05-25 00:31:42.143822 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:42.144244 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:42.145396 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:42.146597 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:42.147077 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:42.148516 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:42.149015 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:42.149925 | orchestrator | 2025-05-25 00:31:42.151513 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-25 00:31:42.152798 | orchestrator | Sunday 25 May 2025 00:31:42 +0000 (0:00:00.498) 0:00:14.364 ************ 2025-05-25 00:31:42.247570 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:31:42.270810 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:31:42.303704 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:31:42.402371 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:31:42.402925 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:31:42.403879 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:31:42.405762 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:31:42.406554 | orchestrator | 2025-05-25 00:31:42.407334 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-25 00:31:42.408294 | orchestrator | Sunday 25 May 2025 00:31:42 +0000 (0:00:00.259) 0:00:14.624 ************ 2025-05-25 00:31:42.887436 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:31:42.888551 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:42.889642 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:31:42.890896 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:42.893386 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:42.895046 | orchestrator | changed: 
[testbed-node-4] 2025-05-25 00:31:42.895081 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:31:42.895976 | orchestrator | 2025-05-25 00:31:42.897119 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-25 00:31:42.897821 | orchestrator | Sunday 25 May 2025 00:31:42 +0000 (0:00:00.484) 0:00:15.108 ************ 2025-05-25 00:31:43.917955 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:43.920086 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:31:43.920535 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:31:43.921713 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:31:43.922995 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:43.923792 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:31:43.924497 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:43.925581 | orchestrator | 2025-05-25 00:31:43.926505 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-25 00:31:43.927260 | orchestrator | Sunday 25 May 2025 00:31:43 +0000 (0:00:01.029) 0:00:16.137 ************ 2025-05-25 00:31:44.980026 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:44.980519 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:44.981154 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:44.982203 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:44.982236 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:44.982895 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:44.983238 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:44.984574 | orchestrator | 2025-05-25 00:31:44.986094 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-25 00:31:44.986136 | orchestrator | Sunday 25 May 2025 00:31:44 +0000 (0:00:01.063) 0:00:17.200 ************ 2025-05-25 00:31:45.278065 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:31:45.278676 | orchestrator | 2025-05-25 00:31:45.282649 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-25 00:31:45.282844 | orchestrator | Sunday 25 May 2025 00:31:45 +0000 (0:00:00.296) 0:00:17.497 ************ 2025-05-25 00:31:45.349507 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:31:46.676959 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:31:46.677278 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:46.680535 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:31:46.681166 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:46.681297 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:31:46.682223 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:31:46.682626 | orchestrator | 2025-05-25 00:31:46.683401 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-25 00:31:46.684425 | orchestrator | Sunday 25 May 2025 00:31:46 +0000 (0:00:01.399) 0:00:18.897 ************ 2025-05-25 00:31:46.744702 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:46.769606 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:46.800265 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:46.817299 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:46.881498 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:46.881997 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:46.883299 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:46.885619 | orchestrator | 2025-05-25 00:31:46.887076 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-25 00:31:46.887742 | orchestrator | Sunday 25 May 2025 00:31:46 
+0000 (0:00:00.206) 0:00:19.103 ************ 2025-05-25 00:31:46.955217 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:47.001469 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:47.027122 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:47.089332 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:47.089904 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:47.090894 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:47.094384 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:47.094476 | orchestrator | 2025-05-25 00:31:47.094492 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-25 00:31:47.094504 | orchestrator | Sunday 25 May 2025 00:31:47 +0000 (0:00:00.207) 0:00:19.311 ************ 2025-05-25 00:31:47.157555 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:47.184034 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:47.214187 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:47.246724 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:47.308053 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:47.309352 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:47.309862 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:47.310721 | orchestrator | 2025-05-25 00:31:47.311062 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-25 00:31:47.311823 | orchestrator | Sunday 25 May 2025 00:31:47 +0000 (0:00:00.218) 0:00:19.530 ************ 2025-05-25 00:31:47.606361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:31:47.606745 | orchestrator | 2025-05-25 00:31:47.607485 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-25 00:31:47.608380 | 
orchestrator | Sunday 25 May 2025 00:31:47 +0000 (0:00:00.297) 0:00:19.827 ************ 2025-05-25 00:31:48.127547 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:48.128286 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:48.128753 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:48.129965 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:48.130890 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:48.131955 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:48.132828 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:48.133525 | orchestrator | 2025-05-25 00:31:48.134679 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-25 00:31:48.135534 | orchestrator | Sunday 25 May 2025 00:31:48 +0000 (0:00:00.521) 0:00:20.349 ************ 2025-05-25 00:31:48.199931 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:31:48.227858 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:31:48.247823 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:31:48.285530 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:31:48.358633 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:31:48.359032 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:31:48.360132 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:31:48.361733 | orchestrator | 2025-05-25 00:31:48.362162 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-25 00:31:48.362944 | orchestrator | Sunday 25 May 2025 00:31:48 +0000 (0:00:00.230) 0:00:20.579 ************ 2025-05-25 00:31:49.361396 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:49.364534 | orchestrator | changed: [testbed-manager] 2025-05-25 00:31:49.364810 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:49.365953 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:49.368490 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:49.371122 | orchestrator | 
changed: [testbed-node-2] 2025-05-25 00:31:49.371837 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:49.373410 | orchestrator | 2025-05-25 00:31:49.373808 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-25 00:31:49.374242 | orchestrator | Sunday 25 May 2025 00:31:49 +0000 (0:00:01.001) 0:00:21.580 ************ 2025-05-25 00:31:49.910068 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:49.910279 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:49.910894 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:49.911705 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:49.913977 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:31:49.914750 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:31:49.915072 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:31:49.915541 | orchestrator | 2025-05-25 00:31:49.915791 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-25 00:31:49.916222 | orchestrator | Sunday 25 May 2025 00:31:49 +0000 (0:00:00.550) 0:00:22.131 ************ 2025-05-25 00:31:51.067202 | orchestrator | ok: [testbed-manager] 2025-05-25 00:31:51.067360 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:31:51.068515 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:31:51.069624 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:31:51.071108 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:31:51.073538 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:31:51.073585 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:31:51.073598 | orchestrator | 2025-05-25 00:31:51.074600 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-25 00:31:51.074842 | orchestrator | Sunday 25 May 2025 00:31:51 +0000 (0:00:01.155) 0:00:23.286 ************ 2025-05-25 00:32:05.005302 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:05.006184 | orchestrator | ok: 
[testbed-node-5] 2025-05-25 00:32:05.006220 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:05.006237 | orchestrator | changed: [testbed-manager] 2025-05-25 00:32:05.006250 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:32:05.006262 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:32:05.006506 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:32:05.008692 | orchestrator | 2025-05-25 00:32:05.008797 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-25 00:32:05.009479 | orchestrator | Sunday 25 May 2025 00:32:04 +0000 (0:00:13.936) 0:00:37.222 ************ 2025-05-25 00:32:05.133031 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:05.172774 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:05.201172 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:05.263509 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:05.267168 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:05.267215 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:05.267229 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:05.267241 | orchestrator | 2025-05-25 00:32:05.267254 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-25 00:32:05.267299 | orchestrator | Sunday 25 May 2025 00:32:05 +0000 (0:00:00.262) 0:00:37.485 ************ 2025-05-25 00:32:05.338599 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:05.363724 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:05.388842 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:05.413523 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:05.468994 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:05.470126 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:05.471202 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:05.472493 | orchestrator | 2025-05-25 00:32:05.473105 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-25 00:32:05.474127 | orchestrator | Sunday 25 May 2025 00:32:05 +0000 (0:00:00.204) 0:00:37.690 ************ 2025-05-25 00:32:05.554310 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:05.578219 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:05.605669 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:05.631457 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:05.688538 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:05.689271 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:05.689733 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:05.690458 | orchestrator | 2025-05-25 00:32:05.690713 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-25 00:32:05.691197 | orchestrator | Sunday 25 May 2025 00:32:05 +0000 (0:00:00.220) 0:00:37.911 ************ 2025-05-25 00:32:05.976807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:32:05.977690 | orchestrator | 2025-05-25 00:32:05.981195 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-25 00:32:05.981221 | orchestrator | Sunday 25 May 2025 00:32:05 +0000 (0:00:00.286) 0:00:38.198 ************ 2025-05-25 00:32:07.518701 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:07.519088 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:07.519936 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:07.520131 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:07.520923 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:07.521097 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:07.521744 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:07.525566 | orchestrator | 2025-05-25 00:32:07.527033 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-25 00:32:07.528282 | orchestrator | Sunday 25 May 2025 00:32:07 +0000 (0:00:01.541) 0:00:39.739 ************ 2025-05-25 00:32:08.665979 | orchestrator | changed: [testbed-manager] 2025-05-25 00:32:08.666137 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:32:08.667142 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:32:08.668290 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:32:08.669235 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:32:08.669951 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:32:08.670952 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:32:08.671929 | orchestrator | 2025-05-25 00:32:08.672147 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-25 00:32:08.673168 | orchestrator | Sunday 25 May 2025 00:32:08 +0000 (0:00:01.139) 0:00:40.878 ************ 2025-05-25 00:32:09.448740 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:09.449662 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:09.449847 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:09.451251 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:09.453125 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:09.453181 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:09.455015 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:09.455976 | orchestrator | 2025-05-25 00:32:09.456629 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-25 00:32:09.457616 | orchestrator | Sunday 25 May 2025 00:32:09 +0000 (0:00:00.791) 0:00:41.669 ************ 2025-05-25 00:32:09.736564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 
00:32:09.736812 | orchestrator | 2025-05-25 00:32:09.737187 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-25 00:32:09.738348 | orchestrator | Sunday 25 May 2025 00:32:09 +0000 (0:00:00.286) 0:00:41.956 ************ 2025-05-25 00:32:10.753483 | orchestrator | changed: [testbed-manager] 2025-05-25 00:32:10.754086 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:32:10.758477 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:32:10.758543 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:32:10.760811 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:32:10.760874 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:32:10.760896 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:32:10.761170 | orchestrator | 2025-05-25 00:32:10.763008 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-25 00:32:10.763593 | orchestrator | Sunday 25 May 2025 00:32:10 +0000 (0:00:01.016) 0:00:42.973 ************ 2025-05-25 00:32:10.851276 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:32:10.874573 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:32:10.902217 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:32:11.033484 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:32:11.034184 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:32:11.038297 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:32:11.038357 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:32:11.038371 | orchestrator | 2025-05-25 00:32:11.038384 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-25 00:32:11.038396 | orchestrator | Sunday 25 May 2025 00:32:11 +0000 (0:00:00.280) 0:00:43.254 ************ 2025-05-25 00:32:22.771468 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:32:22.771625 | orchestrator | changed: [testbed-node-4] 2025-05-25 
00:32:22.771644 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:32:22.771656 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:32:22.771667 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:32:22.771678 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:32:22.771689 | orchestrator | changed: [testbed-manager] 2025-05-25 00:32:22.771700 | orchestrator | 2025-05-25 00:32:22.771779 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-25 00:32:22.772294 | orchestrator | Sunday 25 May 2025 00:32:22 +0000 (0:00:11.733) 0:00:54.988 ************ 2025-05-25 00:32:23.659888 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:23.660094 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:23.661043 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:23.662876 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:23.663484 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:23.664451 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:23.665194 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:23.665595 | orchestrator | 2025-05-25 00:32:23.666210 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-25 00:32:23.667242 | orchestrator | Sunday 25 May 2025 00:32:23 +0000 (0:00:00.891) 0:00:55.880 ************ 2025-05-25 00:32:24.517874 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:24.517981 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:24.518873 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:24.519565 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:24.520896 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:24.521217 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:24.522082 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:24.522955 | orchestrator | 2025-05-25 00:32:24.523727 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
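The "Forward syslog message to local fluentd daemon" task earlier in this transcript typically drops a single forwarding rule into /etc/rsyslog.d/. A minimal sketch of what such a rule can look like, rendered in Python for illustration; the target port (5140) and protocol (udp) are assumptions here, not values taken from this log, and the function is not part of the osism.services.rsyslog role:

```python
# Sketch: build an rsyslog action() line that forwards all messages
# ("*.*") to a local fluentd daemon via the omfwd output module.
# Port and protocol below are assumed defaults, not confirmed by the log.

def render_forward_rule(target: str = "localhost",
                        port: int = 5140,
                        protocol: str = "udp") -> str:
    """Return an rsyslog forwarding rule in RainerScript syntax."""
    return (
        f'*.* action(type="omfwd" target="{target}" '
        f'port="{port}" protocol="{protocol}")'
    )

if __name__ == "__main__":
    print(render_forward_rule())
```

In a real deployment the role templates this into a file such as /etc/rsyslog.d/forward.conf and then notifies a handler to restart rsyslog, which matches the "Manage rsyslog service" task seen above.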
2025-05-25 00:32:24.524433 | orchestrator | Sunday 25 May 2025 00:32:24 +0000 (0:00:00.856) 0:00:56.737 ************ 2025-05-25 00:32:24.586651 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:24.615426 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:24.636821 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:24.664770 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:24.727127 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:24.728746 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:24.729558 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:24.730409 | orchestrator | 2025-05-25 00:32:24.731437 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-25 00:32:24.732251 | orchestrator | Sunday 25 May 2025 00:32:24 +0000 (0:00:00.211) 0:00:56.948 ************ 2025-05-25 00:32:24.793923 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:24.819583 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:24.846153 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:24.867405 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:24.934783 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:24.936682 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:24.938770 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:24.939833 | orchestrator | 2025-05-25 00:32:24.941213 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-25 00:32:24.942690 | orchestrator | Sunday 25 May 2025 00:32:24 +0000 (0:00:00.208) 0:00:57.156 ************ 2025-05-25 00:32:25.229693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:32:25.230875 | orchestrator | 2025-05-25 00:32:25.232202 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-05-25 00:32:25.233676 | orchestrator | Sunday 25 May 2025 00:32:25 +0000 (0:00:00.293) 0:00:57.450 ************ 2025-05-25 00:32:26.774676 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:26.775834 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:26.776869 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:26.778162 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:26.778602 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:26.779766 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:26.780133 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:26.780919 | orchestrator | 2025-05-25 00:32:26.784254 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-25 00:32:26.784281 | orchestrator | Sunday 25 May 2025 00:32:26 +0000 (0:00:01.544) 0:00:58.994 ************ 2025-05-25 00:32:27.330607 | orchestrator | changed: [testbed-manager] 2025-05-25 00:32:27.331182 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:32:27.332957 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:32:27.333602 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:32:27.334612 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:32:27.335312 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:32:27.336000 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:32:27.336605 | orchestrator | 2025-05-25 00:32:27.337637 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-25 00:32:27.337847 | orchestrator | Sunday 25 May 2025 00:32:27 +0000 (0:00:00.556) 0:00:59.550 ************ 2025-05-25 00:32:27.401898 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:27.426642 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:27.457096 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:27.477831 | orchestrator | ok: [testbed-node-5] 2025-05-25 
00:32:27.543956 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:27.545008 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:27.546506 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:27.547154 | orchestrator | 2025-05-25 00:32:27.548345 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-25 00:32:27.549554 | orchestrator | Sunday 25 May 2025 00:32:27 +0000 (0:00:00.214) 0:00:59.765 ************ 2025-05-25 00:32:28.578927 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:28.580195 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:28.581661 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:28.582906 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:28.583434 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:28.584131 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:28.585139 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:28.585501 | orchestrator | 2025-05-25 00:32:28.585902 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-25 00:32:28.586441 | orchestrator | Sunday 25 May 2025 00:32:28 +0000 (0:00:01.033) 0:01:00.798 ************ 2025-05-25 00:32:30.164678 | orchestrator | changed: [testbed-manager] 2025-05-25 00:32:30.165601 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:32:30.166565 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:32:30.167709 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:32:30.168463 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:32:30.168874 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:32:30.169551 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:32:30.170392 | orchestrator | 2025-05-25 00:32:30.170959 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-25 00:32:30.171266 | orchestrator | Sunday 25 May 2025 00:32:30 +0000 (0:00:01.584) 0:01:02.383 ************ 2025-05-25 
00:32:32.399479 | orchestrator | ok: [testbed-manager] 2025-05-25 00:32:32.399643 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:32:32.400614 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:32:32.402757 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:32:32.404635 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:32:32.405444 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:32:32.406131 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:32:32.406433 | orchestrator | 2025-05-25 00:32:32.406882 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-25 00:32:32.407058 | orchestrator | Sunday 25 May 2025 00:32:32 +0000 (0:00:02.235) 0:01:04.618 ************ 2025-05-25 00:33:07.372882 | orchestrator | ok: [testbed-manager] 2025-05-25 00:33:07.373019 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:33:07.373038 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:33:07.373052 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:33:07.373066 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:33:07.373080 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:33:07.373179 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:33:07.373679 | orchestrator | 2025-05-25 00:33:07.373756 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-25 00:33:07.374064 | orchestrator | Sunday 25 May 2025 00:33:07 +0000 (0:00:34.968) 0:01:39.587 ************ 2025-05-25 00:34:29.817408 | orchestrator | changed: [testbed-manager] 2025-05-25 00:34:29.817545 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:34:29.817563 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:34:29.819109 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:34:29.820303 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:34:29.821164 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:34:29.821529 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:34:29.822287 | 
orchestrator | 2025-05-25 00:34:29.822590 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-25 00:34:29.823047 | orchestrator | Sunday 25 May 2025 00:34:29 +0000 (0:01:22.446) 0:03:02.034 ************ 2025-05-25 00:34:31.396084 | orchestrator | ok: [testbed-manager] 2025-05-25 00:34:31.396244 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:34:31.397608 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:34:31.401230 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:34:31.402846 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:34:31.403792 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:34:31.404715 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:34:31.405234 | orchestrator | 2025-05-25 00:34:31.406098 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-25 00:34:31.406815 | orchestrator | Sunday 25 May 2025 00:34:31 +0000 (0:00:01.581) 0:03:03.616 ************ 2025-05-25 00:34:42.658491 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:34:42.658639 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:34:42.658665 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:34:42.659928 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:34:42.661040 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:34:42.661852 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:34:42.661882 | orchestrator | changed: [testbed-manager] 2025-05-25 00:34:42.662208 | orchestrator | 2025-05-25 00:34:42.662710 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-25 00:34:42.662940 | orchestrator | Sunday 25 May 2025 00:34:42 +0000 (0:00:11.254) 0:03:14.870 ************ 2025-05-25 00:34:43.019085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-25 00:34:43.019732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-25 00:34:43.020043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-25 00:34:43.020894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-25 00:34:43.021153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value':
1024}]}) 2025-05-25 00:34:43.021864 | orchestrator | 2025-05-25 00:34:43.025283 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-25 00:34:43.025566 | orchestrator | Sunday 25 May 2025 00:34:43 +0000 (0:00:00.370) 0:03:15.240 ************ 2025-05-25 00:34:43.077855 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-25 00:34:43.103901 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:34:43.103951 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-25 00:34:43.130096 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-25 00:34:43.130906 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:34:43.159864 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-25 00:34:43.160269 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:34:43.189902 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:34:43.722700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-25 00:34:43.722838 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-25 00:34:43.722854 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-25 00:34:43.722895 | orchestrator | 2025-05-25 00:34:43.722982 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-25 00:34:43.723354 | orchestrator | Sunday 25 May 2025 00:34:43 +0000 (0:00:00.701) 0:03:15.941 ************ 2025-05-25 00:34:43.791316 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-25 00:34:43.791445 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-25 00:34:43.791459 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-25 00:34:43.791945 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-25 00:34:43.793112 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-25 00:34:43.793882 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 00:34:43.817871 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-25 00:34:43.818011 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 00:34:43.818079 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-25 00:34:43.818287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 00:34:43.819554 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-25 00:34:43.819733 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 00:34:43.874939 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-25 00:34:43.875675 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 00:34:43.876432 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-25 00:34:43.877460 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 00:34:43.878142 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 00:34:43.879087 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 00:34:43.881082 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 00:34:43.882117 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 00:34:43.882522 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-25 00:34:43.883188 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-25 00:34:43.885960 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-25 00:34:43.886006 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-25 00:34:43.886062 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-25 00:34:43.886333 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 00:34:43.886805 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-25 00:34:43.887573 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 00:34:43.888158 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-25 00:34:43.888910 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 00:34:43.889144 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-25 00:34:43.890185 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 00:34:43.890667 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-25 00:34:43.891115 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 00:34:43.891662 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-25 00:34:43.893346 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-25 00:34:43.893405 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-25 00:34:43.894102 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-25 00:34:43.922604 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-25 00:34:43.923176 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:34:43.924346 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-25 00:34:43.943767 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:34:47.510529 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:34:47.510636 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:34:47.510718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-25 00:34:47.512131 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-25 00:34:47.514295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-25 00:34:47.514583 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-25 00:34:47.516127 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-25 00:34:47.517232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-25 00:34:47.518430 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-25 00:34:47.519875 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-25 00:34:47.520734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-25 00:34:47.521525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-25 00:34:47.522717 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-25 00:34:47.523331 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-25 00:34:47.524205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-25 00:34:47.524682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-25 00:34:47.525011 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-25 00:34:47.525689 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-25 00:34:47.526091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-25 00:34:47.526874 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-25 00:34:47.527583 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-25 00:34:47.527981 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-05-25 00:34:47.528639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-25 00:34:47.529029 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-25 00:34:47.529863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-25 00:34:47.530598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-25 00:34:47.530913 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-25 00:34:47.531554 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-25 00:34:47.532696 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-25 00:34:47.532911 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-25 00:34:47.533479 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-25 00:34:47.534116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-25 00:34:47.535387 | orchestrator | 2025-05-25 00:34:47.535583 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-25 00:34:47.536302 | orchestrator | Sunday 25 May 2025 00:34:47 +0000 (0:00:03.787) 0:03:19.729 ************ 2025-05-25 00:34:48.078365 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 00:34:48.079113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 00:34:48.080838 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 00:34:48.082229 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 00:34:48.083057 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 00:34:48.083883 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 00:34:48.084491 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-25 00:34:48.085831 | orchestrator | 2025-05-25 00:34:48.086676 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-25 00:34:48.088186 | orchestrator | Sunday 25 May 2025 00:34:48 +0000 (0:00:00.569) 0:03:20.298 ************ 2025-05-25 00:34:48.142769 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 00:34:48.171547 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:34:48.248234 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 00:34:48.613318 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 00:34:48.614089 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:34:48.615095 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:34:48.615995 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-25 00:34:48.617081 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:34:48.618604 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-25 00:34:48.619407 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-25 00:34:48.620116 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-25 
00:34:48.620552 | orchestrator | 2025-05-25 00:34:48.621515 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-25 00:34:48.621968 | orchestrator | Sunday 25 May 2025 00:34:48 +0000 (0:00:00.532) 0:03:20.830 ************ 2025-05-25 00:34:48.662418 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 00:34:48.688272 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:34:48.761644 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 00:34:50.155627 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 00:34:50.158609 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:34:50.158708 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:34:50.158723 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-25 00:34:50.158736 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:34:50.159249 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-25 00:34:50.160069 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-25 00:34:50.160670 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-25 00:34:50.161590 | orchestrator | 2025-05-25 00:34:50.162092 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-25 00:34:50.162564 | orchestrator | Sunday 25 May 2025 00:34:50 +0000 (0:00:01.543) 0:03:22.374 ************ 2025-05-25 00:34:50.234645 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:34:50.262669 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:34:50.286488 | orchestrator 
| skipping: [testbed-node-4] 2025-05-25 00:34:50.307911 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:34:50.417532 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:34:50.419259 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:34:50.422105 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:34:50.422148 | orchestrator | 2025-05-25 00:34:50.422161 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-25 00:34:50.423032 | orchestrator | Sunday 25 May 2025 00:34:50 +0000 (0:00:00.263) 0:03:22.638 ************ 2025-05-25 00:34:56.222263 | orchestrator | ok: [testbed-manager] 2025-05-25 00:34:56.223275 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:34:56.224011 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:34:56.224570 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:34:56.225111 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:34:56.225525 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:34:56.226242 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:34:56.226781 | orchestrator | 2025-05-25 00:34:56.227575 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-25 00:34:56.227855 | orchestrator | Sunday 25 May 2025 00:34:56 +0000 (0:00:05.805) 0:03:28.444 ************ 2025-05-25 00:34:56.293856 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-25 00:34:56.330659 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:34:56.331220 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-25 00:34:56.332033 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-25 00:34:56.361554 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:34:56.409636 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-25 00:34:56.409902 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:34:56.410860 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  
2025-05-25 00:34:56.446897 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:34:56.447487 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-25 00:34:56.520650 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:34:56.520742 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:34:56.520756 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-25 00:34:56.521183 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:34:56.522216 | orchestrator | 2025-05-25 00:34:56.523773 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-25 00:34:56.523815 | orchestrator | Sunday 25 May 2025 00:34:56 +0000 (0:00:00.298) 0:03:28.742 ************ 2025-05-25 00:34:57.505150 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-25 00:34:57.506589 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-25 00:34:57.506627 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-25 00:34:57.506982 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-25 00:34:57.508026 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-25 00:34:57.508938 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-25 00:34:57.509714 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-25 00:34:57.510424 | orchestrator | 2025-05-25 00:34:57.511304 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-25 00:34:57.511956 | orchestrator | Sunday 25 May 2025 00:34:57 +0000 (0:00:00.982) 0:03:29.725 ************ 2025-05-25 00:34:57.919038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:34:57.919454 | orchestrator | 2025-05-25 00:34:57.922654 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-05-25 00:34:57.923479 | orchestrator | Sunday 25 May 2025 00:34:57 +0000 (0:00:00.414) 0:03:30.139 ************ 2025-05-25 00:34:59.184942 | orchestrator | ok: [testbed-manager] 2025-05-25 00:34:59.186154 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:34:59.187613 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:34:59.188988 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:34:59.189954 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:34:59.190507 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:34:59.191266 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:34:59.192010 | orchestrator | 2025-05-25 00:34:59.192741 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-25 00:34:59.193524 | orchestrator | Sunday 25 May 2025 00:34:59 +0000 (0:00:01.264) 0:03:31.404 ************ 2025-05-25 00:34:59.738345 | orchestrator | ok: [testbed-manager] 2025-05-25 00:34:59.739271 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:34:59.739584 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:34:59.740621 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:34:59.741532 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:34:59.742108 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:34:59.743435 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:34:59.744055 | orchestrator | 2025-05-25 00:34:59.744680 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-25 00:34:59.745421 | orchestrator | Sunday 25 May 2025 00:34:59 +0000 (0:00:00.555) 0:03:31.959 ************ 2025-05-25 00:35:00.343675 | orchestrator | changed: [testbed-manager] 2025-05-25 00:35:00.343810 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:00.347043 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:35:00.347069 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:00.347081 | orchestrator | changed: [testbed-node-0] 
2025-05-25 00:35:00.347712 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:00.348447 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:00.348901 | orchestrator | 2025-05-25 00:35:00.349689 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-25 00:35:00.350169 | orchestrator | Sunday 25 May 2025 00:35:00 +0000 (0:00:00.603) 0:03:32.563 ************ 2025-05-25 00:35:00.877859 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:00.878616 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:00.879558 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:00.880037 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:00.880682 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:00.881082 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:00.881604 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:35:00.882133 | orchestrator | 2025-05-25 00:35:00.882681 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-25 00:35:00.883172 | orchestrator | Sunday 25 May 2025 00:35:00 +0000 (0:00:00.535) 0:03:33.098 ************ 2025-05-25 00:35:01.828084 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748131431.1485682, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.828418 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748131462.799923, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.828448 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748131460.1926863, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.829943 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748131474.4279125, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.832646 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748131473.2682881, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.832673 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748131467.9942007, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.832686 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748131462.8944163, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.832713 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748131462.903551, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.832848 | orchestrator | changed: [testbed-node-5] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748131383.515107, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.833784 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748131380.2440138, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.834908 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748131393.2636728, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.835009 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1748131392.3888445, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.836119 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748131387.3015175, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.836342 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748131381.6488361, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 00:35:01.837299 | orchestrator | 2025-05-25 00:35:01.837524 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-25 00:35:01.838208 | orchestrator | Sunday 25 May 2025 00:35:01 +0000 (0:00:00.950) 0:03:34.049 ************ 2025-05-25 00:35:02.903465 | orchestrator | changed: [testbed-manager] 2025-05-25 00:35:02.906198 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:02.906231 | orchestrator | changed: [testbed-node-4] 2025-05-25 
00:35:02.906244 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:02.906255 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:35:02.906582 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:02.907469 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:02.908403 | orchestrator | 2025-05-25 00:35:02.908782 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-25 00:35:02.909218 | orchestrator | Sunday 25 May 2025 00:35:02 +0000 (0:00:01.072) 0:03:35.122 ************ 2025-05-25 00:35:03.994249 | orchestrator | changed: [testbed-manager] 2025-05-25 00:35:03.994440 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:03.995158 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:35:03.996000 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:03.997149 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:35:03.997962 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:03.998880 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:03.999435 | orchestrator | 2025-05-25 00:35:04.000057 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-25 00:35:04.000463 | orchestrator | Sunday 25 May 2025 00:35:03 +0000 (0:00:01.090) 0:03:36.212 ************ 2025-05-25 00:35:04.083415 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:35:04.134370 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:35:04.169108 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:35:04.201730 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:35:04.264291 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:35:04.264951 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:35:04.265895 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:35:04.267124 | orchestrator | 2025-05-25 00:35:04.267149 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 
2025-05-25 00:35:04.267523 | orchestrator | Sunday 25 May 2025 00:35:04 +0000 (0:00:00.273) 0:03:36.485 ************ 2025-05-25 00:35:04.974851 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:04.974975 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:04.975319 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:04.979411 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:04.980825 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:04.981325 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:04.981980 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:35:04.982986 | orchestrator | 2025-05-25 00:35:04.983606 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-25 00:35:04.983956 | orchestrator | Sunday 25 May 2025 00:35:04 +0000 (0:00:00.709) 0:03:37.195 ************ 2025-05-25 00:35:05.331512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:35:05.331716 | orchestrator | 2025-05-25 00:35:05.332145 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-25 00:35:05.332768 | orchestrator | Sunday 25 May 2025 00:35:05 +0000 (0:00:00.355) 0:03:37.550 ************ 2025-05-25 00:35:12.862191 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:12.866556 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:12.868671 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:12.869042 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:12.869436 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:12.869846 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:35:12.870297 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:35:12.871249 | orchestrator | 2025-05-25 00:35:12.871277 | orchestrator | 
TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-25 00:35:12.871406 | orchestrator | Sunday 25 May 2025 00:35:12 +0000 (0:00:07.530) 0:03:45.080 ************ 2025-05-25 00:35:14.020254 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:14.023259 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:14.023309 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:14.023327 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:14.024659 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:14.024698 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:35:14.025288 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:14.026352 | orchestrator | 2025-05-25 00:35:14.026792 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-25 00:35:14.027223 | orchestrator | Sunday 25 May 2025 00:35:14 +0000 (0:00:01.159) 0:03:46.240 ************ 2025-05-25 00:35:15.069365 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:15.069588 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:15.070912 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:15.071598 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:15.072491 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:15.072676 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:15.074097 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:35:15.074121 | orchestrator | 2025-05-25 00:35:15.075260 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-25 00:35:15.075708 | orchestrator | Sunday 25 May 2025 00:35:15 +0000 (0:00:01.047) 0:03:47.288 ************ 2025-05-25 00:35:15.451446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:35:15.451658 | 
orchestrator | 2025-05-25 00:35:15.451843 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-25 00:35:15.452603 | orchestrator | Sunday 25 May 2025 00:35:15 +0000 (0:00:00.384) 0:03:47.672 ************ 2025-05-25 00:35:23.598105 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:23.599204 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:23.600627 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:23.602096 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:23.602437 | orchestrator | changed: [testbed-manager] 2025-05-25 00:35:23.603632 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:35:23.604568 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:35:23.605556 | orchestrator | 2025-05-25 00:35:23.606651 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-25 00:35:23.607014 | orchestrator | Sunday 25 May 2025 00:35:23 +0000 (0:00:08.146) 0:03:55.819 ************ 2025-05-25 00:35:24.336598 | orchestrator | changed: [testbed-manager] 2025-05-25 00:35:24.336761 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:24.337541 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:35:24.338523 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:24.339292 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:35:24.339996 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:24.340501 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:24.340899 | orchestrator | 2025-05-25 00:35:24.341357 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-25 00:35:24.341984 | orchestrator | Sunday 25 May 2025 00:35:24 +0000 (0:00:00.736) 0:03:56.556 ************ 2025-05-25 00:35:25.425343 | orchestrator | changed: [testbed-manager] 2025-05-25 00:35:25.428867 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:25.428903 | orchestrator | 
changed: [testbed-node-4] 2025-05-25 00:35:25.428962 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:25.429763 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:35:25.430587 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:25.431116 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:25.432424 | orchestrator | 2025-05-25 00:35:25.432596 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-25 00:35:25.433327 | orchestrator | Sunday 25 May 2025 00:35:25 +0000 (0:00:01.089) 0:03:57.645 ************ 2025-05-25 00:35:26.414094 | orchestrator | changed: [testbed-manager] 2025-05-25 00:35:26.414197 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:35:26.418257 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:35:26.418617 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:35:26.419122 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:35:26.419600 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:35:26.420672 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:35:26.421485 | orchestrator | 2025-05-25 00:35:26.421716 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-25 00:35:26.421950 | orchestrator | Sunday 25 May 2025 00:35:26 +0000 (0:00:00.988) 0:03:58.634 ************ 2025-05-25 00:35:26.524964 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:26.565084 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:26.610148 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:26.654847 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:26.718167 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:26.718300 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:26.718999 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:35:26.720516 | orchestrator | 2025-05-25 00:35:26.722320 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default 
value] *** 2025-05-25 00:35:26.723368 | orchestrator | Sunday 25 May 2025 00:35:26 +0000 (0:00:00.306) 0:03:58.940 ************ 2025-05-25 00:35:26.839334 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:26.874132 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:26.906270 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:26.943720 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:27.035501 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:27.036195 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:27.037052 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:35:27.038185 | orchestrator | 2025-05-25 00:35:27.039201 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-25 00:35:27.040306 | orchestrator | Sunday 25 May 2025 00:35:27 +0000 (0:00:00.315) 0:03:59.256 ************ 2025-05-25 00:35:27.138142 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:27.186906 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:27.218913 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:27.257016 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:27.331991 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:27.332567 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:27.333017 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:35:27.333861 | orchestrator | 2025-05-25 00:35:27.335127 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-25 00:35:27.335360 | orchestrator | Sunday 25 May 2025 00:35:27 +0000 (0:00:00.297) 0:03:59.553 ************ 2025-05-25 00:35:33.220014 | orchestrator | ok: [testbed-manager] 2025-05-25 00:35:33.220573 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:35:33.221239 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:35:33.221698 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:35:33.222697 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:35:33.224797 | orchestrator | ok: 
[testbed-node-2] 2025-05-25 00:35:33.225770 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:35:33.226134 | orchestrator | 2025-05-25 00:35:33.226939 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-25 00:35:33.227651 | orchestrator | Sunday 25 May 2025 00:35:33 +0000 (0:00:05.887) 0:04:05.441 ************ 2025-05-25 00:35:33.602500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:35:33.603125 | orchestrator | 2025-05-25 00:35:33.603893 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-25 00:35:33.604641 | orchestrator | Sunday 25 May 2025 00:35:33 +0000 (0:00:00.381) 0:04:05.822 ************ 2025-05-25 00:35:33.678325 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-25 00:35:33.679266 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-25 00:35:33.679320 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-25 00:35:33.716859 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-25 00:35:33.717984 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:35:33.768790 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-25 00:35:33.769144 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:35:33.769747 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-25 00:35:33.771037 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-25 00:35:33.806242 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:35:33.806585 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-25 00:35:33.808530 | orchestrator | skipping: [testbed-node-0] => 
(item=apt-daily-upgrade)  2025-05-25 00:35:33.809264 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-25 00:35:33.838377 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:35:33.923811 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-25 00:35:33.923950 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:35:33.924085 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-25 00:35:33.924495 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:35:33.925272 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-25 00:35:33.926105 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-25 00:35:33.926821 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:35:33.928037 | orchestrator | 2025-05-25 00:35:33.928863 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-25 00:35:33.929893 | orchestrator | Sunday 25 May 2025 00:35:33 +0000 (0:00:00.322) 0:04:06.144 ************ 2025-05-25 00:35:34.281498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:35:34.281751 | orchestrator | 2025-05-25 00:35:34.282897 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-25 00:35:34.286315 | orchestrator | Sunday 25 May 2025 00:35:34 +0000 (0:00:00.357) 0:04:06.502 ************ 2025-05-25 00:35:34.355307 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-25 00:35:34.391309 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:35:34.391662 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-25 00:35:34.392219 | orchestrator | skipping: [testbed-node-4] => 
(item=ModemManager.service)  2025-05-25 00:35:34.432479 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:35:34.433018 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-25 00:35:34.465994 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:35:34.517015 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:35:34.517527 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-25 00:35:34.518202 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-25 00:35:34.581032 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:35:34.581745 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:35:34.582598 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-25 00:35:34.584335 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:35:34.584987 | orchestrator | 2025-05-25 00:35:34.585483 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-25 00:35:34.585968 | orchestrator | Sunday 25 May 2025 00:35:34 +0000 (0:00:00.300) 0:04:06.802 ************ 2025-05-25 00:35:34.964594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:35:34.965007 | orchestrator | 2025-05-25 00:35:34.965742 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-25 00:35:34.966502 | orchestrator | Sunday 25 May 2025 00:35:34 +0000 (0:00:00.381) 0:04:07.184 ************ 2025-05-25 00:36:08.090612 | orchestrator | changed: [testbed-manager] 2025-05-25 00:36:08.090751 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:08.090923 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:08.090943 | orchestrator | changed: 
[testbed-node-2] 2025-05-25 00:36:08.090956 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:08.091065 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:08.091213 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:08.091872 | orchestrator | 2025-05-25 00:36:08.092689 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-25 00:36:08.093879 | orchestrator | Sunday 25 May 2025 00:36:08 +0000 (0:00:33.119) 0:04:40.303 ************ 2025-05-25 00:36:16.012898 | orchestrator | changed: [testbed-manager] 2025-05-25 00:36:16.013017 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:16.013970 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:16.015668 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:16.016636 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:16.017088 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:16.018003 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:16.019240 | orchestrator | 2025-05-25 00:36:16.020549 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-25 00:36:16.021317 | orchestrator | Sunday 25 May 2025 00:36:16 +0000 (0:00:07.927) 0:04:48.230 ************ 2025-05-25 00:36:23.316177 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:23.316468 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:23.316917 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:23.318320 | orchestrator | changed: [testbed-manager] 2025-05-25 00:36:23.319652 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:23.320802 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:23.321388 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:23.322925 | orchestrator | 2025-05-25 00:36:23.323537 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-25 00:36:23.324303 | orchestrator | 
Sunday 25 May 2025 00:36:23 +0000 (0:00:07.305) 0:04:55.535 ************ 2025-05-25 00:36:24.929942 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:24.930102 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:36:24.930721 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:36:24.931628 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:36:24.932746 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:36:24.933897 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:36:24.934915 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:36:24.936096 | orchestrator | 2025-05-25 00:36:24.937292 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-25 00:36:24.938207 | orchestrator | Sunday 25 May 2025 00:36:24 +0000 (0:00:01.610) 0:04:57.145 ************ 2025-05-25 00:36:30.494891 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:30.495004 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:30.495672 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:30.496304 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:30.497686 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:30.499291 | orchestrator | changed: [testbed-manager] 2025-05-25 00:36:30.501195 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:30.502292 | orchestrator | 2025-05-25 00:36:30.503237 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-25 00:36:30.504206 | orchestrator | Sunday 25 May 2025 00:36:30 +0000 (0:00:05.566) 0:05:02.712 ************ 2025-05-25 00:36:30.911162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:36:30.911801 | orchestrator | 2025-05-25 00:36:30.912477 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init 
configuration directory] ******* 2025-05-25 00:36:30.916030 | orchestrator | Sunday 25 May 2025 00:36:30 +0000 (0:00:00.419) 0:05:03.131 ************ 2025-05-25 00:36:31.623161 | orchestrator | changed: [testbed-manager] 2025-05-25 00:36:31.626169 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:31.626213 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:31.626225 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:31.626286 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:31.626977 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:31.627568 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:31.628506 | orchestrator | 2025-05-25 00:36:31.629087 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-25 00:36:31.629578 | orchestrator | Sunday 25 May 2025 00:36:31 +0000 (0:00:00.710) 0:05:03.842 ************ 2025-05-25 00:36:33.206879 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:33.207097 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:36:33.208109 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:36:33.208581 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:36:33.209994 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:36:33.210948 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:36:33.212002 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:36:33.212854 | orchestrator | 2025-05-25 00:36:33.214914 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-25 00:36:33.215481 | orchestrator | Sunday 25 May 2025 00:36:33 +0000 (0:00:01.582) 0:05:05.425 ************ 2025-05-25 00:36:33.954510 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:33.954729 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:33.955478 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:33.956851 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:33.957090 | orchestrator | 
changed: [testbed-manager] 2025-05-25 00:36:33.957360 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:33.957865 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:33.959062 | orchestrator | 2025-05-25 00:36:33.959303 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-25 00:36:33.960729 | orchestrator | Sunday 25 May 2025 00:36:33 +0000 (0:00:00.748) 0:05:06.174 ************ 2025-05-25 00:36:34.068519 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:36:34.103002 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:36:34.133941 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:36:34.168639 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:36:34.219170 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:36:34.219685 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:36:34.220540 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:36:34.221780 | orchestrator | 2025-05-25 00:36:34.222420 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-25 00:36:34.222911 | orchestrator | Sunday 25 May 2025 00:36:34 +0000 (0:00:00.266) 0:05:06.440 ************ 2025-05-25 00:36:34.282474 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:36:34.310943 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:36:34.342498 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:36:34.372690 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:36:34.406593 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:36:34.594501 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:36:34.594617 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:36:34.595244 | orchestrator | 2025-05-25 00:36:34.596014 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-25 00:36:34.597439 | orchestrator | Sunday 25 May 2025 00:36:34 +0000 (0:00:00.374) 
0:05:06.815 ************ 2025-05-25 00:36:34.740928 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:34.776563 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:36:34.822789 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:36:34.870253 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:36:34.955464 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:36:34.955640 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:36:34.956126 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:36:34.956562 | orchestrator | 2025-05-25 00:36:34.957027 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-25 00:36:34.957519 | orchestrator | Sunday 25 May 2025 00:36:34 +0000 (0:00:00.362) 0:05:07.177 ************ 2025-05-25 00:36:35.037946 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:36:35.081056 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:36:35.116194 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:36:35.146781 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:36:35.230698 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:36:35.231667 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:36:35.232642 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:36:35.233949 | orchestrator | 2025-05-25 00:36:35.234577 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-25 00:36:35.235780 | orchestrator | Sunday 25 May 2025 00:36:35 +0000 (0:00:00.275) 0:05:07.452 ************ 2025-05-25 00:36:35.330255 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:35.377051 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:36:35.414990 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:36:35.447866 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:36:35.523456 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:36:35.523829 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:36:35.524540 | orchestrator | ok: 
[testbed-node-2] 2025-05-25 00:36:35.526844 | orchestrator | 2025-05-25 00:36:35.527474 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-25 00:36:35.528965 | orchestrator | Sunday 25 May 2025 00:36:35 +0000 (0:00:00.291) 0:05:07.744 ************ 2025-05-25 00:36:35.625167 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:36:35.664769 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:36:35.696659 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:36:35.729223 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:36:35.786258 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:36:35.786348 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:36:35.786361 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:36:35.786952 | orchestrator | 2025-05-25 00:36:35.787622 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-25 00:36:35.787967 | orchestrator | Sunday 25 May 2025 00:36:35 +0000 (0:00:00.263) 0:05:08.007 ************ 2025-05-25 00:36:35.883634 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:36:35.928371 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:36:35.963342 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:36:36.003235 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:36:36.048271 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:36:36.113543 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:36:36.117064 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:36:36.117095 | orchestrator | 2025-05-25 00:36:36.117109 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-25 00:36:36.117911 | orchestrator | Sunday 25 May 2025 00:36:36 +0000 (0:00:00.326) 0:05:08.333 ************ 2025-05-25 00:36:36.589746 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:36:36.589944 | orchestrator | 2025-05-25 00:36:36.590522 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-25 00:36:36.591041 | orchestrator | Sunday 25 May 2025 00:36:36 +0000 (0:00:00.477) 0:05:08.811 ************ 2025-05-25 00:36:37.395732 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:37.396528 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:36:37.396982 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:36:37.398671 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:36:37.400084 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:36:37.400500 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:36:37.401554 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:36:37.402259 | orchestrator | 2025-05-25 00:36:37.403054 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-25 00:36:37.403729 | orchestrator | Sunday 25 May 2025 00:36:37 +0000 (0:00:00.803) 0:05:09.614 ************ 2025-05-25 00:36:40.079526 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:36:40.080846 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:36:40.082928 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:36:40.083885 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:36:40.085161 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:36:40.085790 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:40.086816 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:36:40.087689 | orchestrator | 2025-05-25 00:36:40.088308 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-25 00:36:40.089098 | orchestrator | Sunday 25 May 2025 00:36:40 +0000 (0:00:02.686) 
0:05:12.300 ************ 2025-05-25 00:36:40.156276 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-25 00:36:40.156641 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-25 00:36:40.157561 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-25 00:36:40.229848 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:36:40.230464 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-25 00:36:40.231053 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-25 00:36:40.231597 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-25 00:36:40.299297 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:36:40.300155 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-25 00:36:40.301252 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-25 00:36:40.302359 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-25 00:36:40.380603 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:36:40.381609 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-25 00:36:40.386136 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-25 00:36:40.386175 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-25 00:36:40.450293 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:36:40.451087 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-25 00:36:40.451992 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-25 00:36:40.456003 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-25 00:36:40.518132 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:36:40.522118 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-25 00:36:40.522152 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-25 
00:36:40.522164 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-25 00:36:40.647602 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:36:40.649255 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-25 00:36:40.652130 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-25 00:36:40.652157 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-25 00:36:40.652169 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:36:40.652527 | orchestrator | 2025-05-25 00:36:40.653286 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-25 00:36:40.653933 | orchestrator | Sunday 25 May 2025 00:36:40 +0000 (0:00:00.568) 0:05:12.869 ************ 2025-05-25 00:36:46.926384 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:46.926673 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:46.927556 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:46.928430 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:46.928898 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:46.929546 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:46.930846 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:46.931565 | orchestrator | 2025-05-25 00:36:46.932226 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-25 00:36:46.932874 | orchestrator | Sunday 25 May 2025 00:36:46 +0000 (0:00:06.275) 0:05:19.144 ************ 2025-05-25 00:36:47.935677 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:47.939447 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:47.939485 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:47.939499 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:47.939512 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:47.939523 | orchestrator | changed: [testbed-node-1] 2025-05-25 
00:36:47.939845 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:47.940427 | orchestrator | 2025-05-25 00:36:47.941111 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-25 00:36:47.941385 | orchestrator | Sunday 25 May 2025 00:36:47 +0000 (0:00:01.009) 0:05:20.154 ************ 2025-05-25 00:36:55.317286 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:55.317678 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:55.320739 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:55.321294 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:55.322448 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:55.323658 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:55.324242 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:55.325187 | orchestrator | 2025-05-25 00:36:55.325446 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-25 00:36:55.325890 | orchestrator | Sunday 25 May 2025 00:36:55 +0000 (0:00:07.380) 0:05:27.534 ************ 2025-05-25 00:36:58.499791 | orchestrator | changed: [testbed-manager] 2025-05-25 00:36:58.500707 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:58.500877 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:58.503328 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:58.504371 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:58.505099 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:58.506009 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:58.507174 | orchestrator | 2025-05-25 00:36:58.507550 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-25 00:36:58.508568 | orchestrator | Sunday 25 May 2025 00:36:58 +0000 (0:00:03.183) 0:05:30.718 ************ 2025-05-25 00:36:59.867533 | orchestrator | ok: [testbed-manager] 2025-05-25 00:36:59.868448 | 
orchestrator | changed: [testbed-node-3] 2025-05-25 00:36:59.868762 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:36:59.870394 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:36:59.871175 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:36:59.872049 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:36:59.873134 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:36:59.873835 | orchestrator | 2025-05-25 00:36:59.874618 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-25 00:36:59.875583 | orchestrator | Sunday 25 May 2025 00:36:59 +0000 (0:00:01.367) 0:05:32.086 ************ 2025-05-25 00:37:01.357967 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:01.358139 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:01.359900 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:01.360981 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:01.361932 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:01.362845 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:01.363925 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:01.364698 | orchestrator | 2025-05-25 00:37:01.365143 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-25 00:37:01.366109 | orchestrator | Sunday 25 May 2025 00:37:01 +0000 (0:00:01.489) 0:05:33.575 ************ 2025-05-25 00:37:01.566456 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:37:01.649516 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:37:01.714647 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:37:01.784892 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:37:01.964768 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:37:01.965075 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:37:01.965104 | orchestrator | changed: [testbed-manager] 2025-05-25 00:37:01.965985 | orchestrator | 
2025-05-25 00:37:01.966485 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-25 00:37:01.967276 | orchestrator | Sunday 25 May 2025 00:37:01 +0000 (0:00:00.610) 0:05:34.185 ************ 2025-05-25 00:37:11.358260 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:11.358675 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:11.359985 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:11.361769 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:11.362400 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:11.362907 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:11.363587 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:11.363968 | orchestrator | 2025-05-25 00:37:11.365606 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-25 00:37:11.365631 | orchestrator | Sunday 25 May 2025 00:37:11 +0000 (0:00:09.390) 0:05:43.575 ************ 2025-05-25 00:37:12.293889 | orchestrator | changed: [testbed-manager] 2025-05-25 00:37:12.294242 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:12.295181 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:12.295858 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:12.296617 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:12.296979 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:12.298711 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:12.299155 | orchestrator | 2025-05-25 00:37:12.299861 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-25 00:37:12.300530 | orchestrator | Sunday 25 May 2025 00:37:12 +0000 (0:00:00.938) 0:05:44.514 ************ 2025-05-25 00:37:24.095589 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:24.095713 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:24.097492 | orchestrator | changed: [testbed-node-4] 
2025-05-25 00:37:24.097990 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:24.099085 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:24.099687 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:24.100565 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:24.100824 | orchestrator | 2025-05-25 00:37:24.101829 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-25 00:37:24.102108 | orchestrator | Sunday 25 May 2025 00:37:24 +0000 (0:00:11.797) 0:05:56.312 ************ 2025-05-25 00:37:36.233139 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:36.233260 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:36.233276 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:36.233288 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:36.233299 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:36.237185 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:36.239564 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:36.240635 | orchestrator | 2025-05-25 00:37:36.241255 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-25 00:37:36.241958 | orchestrator | Sunday 25 May 2025 00:37:36 +0000 (0:00:12.131) 0:06:08.444 ************ 2025-05-25 00:37:36.597558 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-25 00:37:36.673186 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-25 00:37:37.466357 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-25 00:37:37.466471 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-25 00:37:37.466511 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-25 00:37:37.467581 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-25 00:37:37.468466 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-25 00:37:37.469543 | 
orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-25 00:37:37.470401 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-25 00:37:37.471023 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-25 00:37:37.472209 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-25 00:37:37.472764 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-25 00:37:37.473573 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-25 00:37:37.475313 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-25 00:37:37.477271 | orchestrator | 2025-05-25 00:37:37.477295 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-25 00:37:37.477598 | orchestrator | Sunday 25 May 2025 00:37:37 +0000 (0:00:01.240) 0:06:09.684 ************ 2025-05-25 00:37:37.610508 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:37:37.672969 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:37:37.742188 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:37:37.801112 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:37:37.863818 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:37:37.987985 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:37:37.988572 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:37:37.990149 | orchestrator | 2025-05-25 00:37:37.990177 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-25 00:37:37.990192 | orchestrator | Sunday 25 May 2025 00:37:37 +0000 (0:00:00.525) 0:06:10.209 ************ 2025-05-25 00:37:41.682785 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:41.683251 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:41.684724 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:41.687039 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:41.687194 | orchestrator | changed: 
[testbed-node-4] 2025-05-25 00:37:41.688544 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:41.689375 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:41.689909 | orchestrator | 2025-05-25 00:37:41.690582 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-25 00:37:41.690981 | orchestrator | Sunday 25 May 2025 00:37:41 +0000 (0:00:03.691) 0:06:13.901 ************ 2025-05-25 00:37:41.816826 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:37:41.881645 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:37:41.953175 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:37:42.170323 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:37:42.234057 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:37:42.338761 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:37:42.338900 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:37:42.339505 | orchestrator | 2025-05-25 00:37:42.340173 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-25 00:37:42.340608 | orchestrator | Sunday 25 May 2025 00:37:42 +0000 (0:00:00.657) 0:06:14.558 ************ 2025-05-25 00:37:42.416154 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-25 00:37:42.416280 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-25 00:37:42.485191 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:37:42.485692 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-25 00:37:42.487300 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-25 00:37:42.553013 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:37:42.553657 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-25 00:37:42.554566 | orchestrator | skipping: [testbed-node-4] => 
(item=python-docker)  2025-05-25 00:37:42.635391 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:37:42.636257 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-25 00:37:42.637106 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-25 00:37:42.702700 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:37:42.702882 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-25 00:37:42.703686 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-25 00:37:42.769604 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:37:42.770548 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-25 00:37:42.771221 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-25 00:37:42.894566 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:37:42.895616 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-25 00:37:42.897097 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-25 00:37:42.899516 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:37:42.900574 | orchestrator | 2025-05-25 00:37:42.901283 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-25 00:37:42.902648 | orchestrator | Sunday 25 May 2025 00:37:42 +0000 (0:00:00.556) 0:06:15.115 ************ 2025-05-25 00:37:43.023384 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:37:43.094001 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:37:43.156269 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:37:43.218309 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:37:43.286637 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:37:43.374471 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:37:43.374643 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:37:43.375980 | orchestrator | 
2025-05-25 00:37:43.380249 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-25 00:37:43.380637 | orchestrator | Sunday 25 May 2025 00:37:43 +0000 (0:00:00.477) 0:06:15.593 ************ 2025-05-25 00:37:43.530234 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:37:43.603396 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:37:43.669106 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:37:43.738956 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:37:43.823053 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:37:43.932655 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:37:43.933381 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:37:43.935075 | orchestrator | 2025-05-25 00:37:43.936301 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-25 00:37:43.937200 | orchestrator | Sunday 25 May 2025 00:37:43 +0000 (0:00:00.558) 0:06:16.152 ************ 2025-05-25 00:37:44.060292 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:37:44.119982 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:37:44.187090 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:37:44.246954 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:37:44.309462 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:37:44.436817 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:37:44.437218 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:37:44.437936 | orchestrator | 2025-05-25 00:37:44.438603 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-25 00:37:44.439503 | orchestrator | Sunday 25 May 2025 00:37:44 +0000 (0:00:00.504) 0:06:16.656 ************ 2025-05-25 00:37:50.360351 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:50.360579 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:50.360603 | 
orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:50.361129 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:50.362490 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:50.363640 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:50.364893 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:50.364920 | orchestrator | 2025-05-25 00:37:50.365118 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-25 00:37:50.365347 | orchestrator | Sunday 25 May 2025 00:37:50 +0000 (0:00:05.923) 0:06:22.579 ************ 2025-05-25 00:37:51.202626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:37:51.203095 | orchestrator | 2025-05-25 00:37:51.203701 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-25 00:37:51.208237 | orchestrator | Sunday 25 May 2025 00:37:51 +0000 (0:00:00.842) 0:06:23.422 ************ 2025-05-25 00:37:52.020763 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:52.020887 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:52.021865 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:52.022931 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:52.024325 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:52.025744 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:52.028302 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:52.028348 | orchestrator | 2025-05-25 00:37:52.029636 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-25 00:37:52.030119 | orchestrator | Sunday 25 May 2025 00:37:52 +0000 (0:00:00.816) 0:06:24.238 ************ 2025-05-25 00:37:52.414770 | orchestrator | ok: [testbed-manager] 
2025-05-25 00:37:52.830617 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:52.831178 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:52.832331 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:52.832810 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:52.833648 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:52.834306 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:52.834680 | orchestrator | 2025-05-25 00:37:52.835544 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-25 00:37:52.835952 | orchestrator | Sunday 25 May 2025 00:37:52 +0000 (0:00:00.812) 0:06:25.051 ************ 2025-05-25 00:37:54.362775 | orchestrator | ok: [testbed-manager] 2025-05-25 00:37:54.363348 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:37:54.364954 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:37:54.365737 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:37:54.366182 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:37:54.366870 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:37:54.367173 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:37:54.367690 | orchestrator | 2025-05-25 00:37:54.367974 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-25 00:37:54.368582 | orchestrator | Sunday 25 May 2025 00:37:54 +0000 (0:00:01.532) 0:06:26.583 ************ 2025-05-25 00:37:54.495908 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:37:55.734831 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:37:55.737966 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:37:55.737999 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:37:55.739151 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:37:55.740593 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:37:55.742014 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:37:55.742659 | orchestrator | 
2025-05-25 00:37:55.743302 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-25 00:37:55.744084 | orchestrator | Sunday 25 May 2025 00:37:55 +0000 (0:00:01.368) 0:06:27.952 ************
2025-05-25 00:37:57.121692 | orchestrator | ok: [testbed-manager]
2025-05-25 00:37:57.122535 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:37:57.123788 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:37:57.124758 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:37:57.125858 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:37:57.126366 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:37:57.126995 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:37:57.127494 | orchestrator |
2025-05-25 00:37:57.128640 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-25 00:37:57.128862 | orchestrator | Sunday 25 May 2025 00:37:57 +0000 (0:00:01.387) 0:06:29.340 ************
2025-05-25 00:37:58.462627 | orchestrator | changed: [testbed-manager]
2025-05-25 00:37:58.462713 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:37:58.463109 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:37:58.464603 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:37:58.466117 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:37:58.467147 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:37:58.468496 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:37:58.468944 | orchestrator |
2025-05-25 00:37:58.470936 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-25 00:37:58.471448 | orchestrator | Sunday 25 May 2025 00:37:58 +0000 (0:00:01.342) 0:06:30.682 ************
2025-05-25 00:37:59.435599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:37:59.436140 | orchestrator |
2025-05-25 00:37:59.437008 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-25 00:37:59.437659 | orchestrator | Sunday 25 May 2025 00:37:59 +0000 (0:00:00.973) 0:06:31.656 ************
2025-05-25 00:38:00.763744 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:00.763856 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:00.763989 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:00.764009 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:00.765553 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:00.767083 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:00.767110 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:00.767122 | orchestrator |
2025-05-25 00:38:00.767367 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-25 00:38:00.768722 | orchestrator | Sunday 25 May 2025 00:38:00 +0000 (0:00:01.326) 0:06:32.982 ************
2025-05-25 00:38:01.843378 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:01.845061 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:01.845137 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:01.846084 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:01.847355 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:01.848327 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:01.849076 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:01.850171 | orchestrator |
2025-05-25 00:38:01.851400 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-25 00:38:01.851763 | orchestrator | Sunday 25 May 2025 00:38:01 +0000 (0:00:01.079) 0:06:34.062 ************
2025-05-25 00:38:02.920183 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:02.920360 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:02.921190 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:02.922459 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:02.924618 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:02.924704 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:02.924722 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:02.924734 | orchestrator |
2025-05-25 00:38:02.924802 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-25 00:38:02.925228 | orchestrator | Sunday 25 May 2025 00:38:02 +0000 (0:00:01.078) 0:06:35.140 ************
2025-05-25 00:38:04.228395 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:04.229194 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:04.231075 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:04.231491 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:04.232319 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:04.232937 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:04.233347 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:04.234137 | orchestrator |
2025-05-25 00:38:04.234236 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-25 00:38:04.234969 | orchestrator | Sunday 25 May 2025 00:38:04 +0000 (0:00:01.306) 0:06:36.447 ************
2025-05-25 00:38:05.336204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:38:05.336461 | orchestrator |
2025-05-25 00:38:05.336982 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 00:38:05.337637 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.038) 0:06:37.275 ************
2025-05-25 00:38:05.338292 | orchestrator |
2025-05-25 00:38:05.338989 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 00:38:05.339264 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.043) 0:06:37.313 ************
2025-05-25 00:38:05.340961 | orchestrator |
2025-05-25 00:38:05.341954 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 00:38:05.342850 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.036) 0:06:37.357 ************
2025-05-25 00:38:05.343370 | orchestrator |
2025-05-25 00:38:05.344461 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 00:38:05.345688 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.036) 0:06:37.394 ************
2025-05-25 00:38:05.346085 | orchestrator |
2025-05-25 00:38:05.349145 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 00:38:05.349363 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.036) 0:06:37.431 ************
2025-05-25 00:38:05.350382 | orchestrator |
2025-05-25 00:38:05.350671 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 00:38:05.351991 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.044) 0:06:37.475 ************
2025-05-25 00:38:05.352014 | orchestrator |
2025-05-25 00:38:05.352297 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-25 00:38:05.353004 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.038) 0:06:37.513 ************
2025-05-25 00:38:05.353409 | orchestrator |
2025-05-25 00:38:05.353764 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-25 00:38:05.354619 | orchestrator | Sunday 25 May 2025 00:38:05 +0000 (0:00:00.038) 0:06:37.551 ************
2025-05-25 00:38:06.420801 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:06.421049 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:06.422810 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:06.422847 | orchestrator |
2025-05-25 00:38:06.422958 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-25 00:38:06.423357 | orchestrator | Sunday 25 May 2025 00:38:06 +0000 (0:00:01.087) 0:06:38.638 ************
2025-05-25 00:38:07.956794 | orchestrator | changed: [testbed-manager]
2025-05-25 00:38:07.956970 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:07.957765 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:07.957882 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:07.958410 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:07.959637 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:07.961054 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:07.961076 | orchestrator |
2025-05-25 00:38:07.961089 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-25 00:38:07.961312 | orchestrator | Sunday 25 May 2025 00:38:07 +0000 (0:00:01.535) 0:06:40.174 ************
2025-05-25 00:38:09.130353 | orchestrator | changed: [testbed-manager]
2025-05-25 00:38:09.130870 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:09.132949 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:09.134190 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:09.134589 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:09.136326 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:09.137473 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:09.138300 | orchestrator |
2025-05-25 00:38:09.139038 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-25 00:38:09.139583 | orchestrator | Sunday 25 May 2025 00:38:09 +0000 (0:00:01.174) 0:06:41.349 ************
2025-05-25 00:38:09.272554 | orchestrator | skipping: [testbed-manager]
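The `RUNNING HANDLER` entries above are Ansible handlers: tasks that were notified earlier in the play and run in a batch at a flush point. The repeated `Flush handlers` tasks are `meta: flush_handlers` calls that force pending handlers to run early, and a handler only executes on hosts where a notifying task reported `changed`, which is why testbed-manager is skipped for the docker restart. A hedged sketch of the mechanism (task and file names are illustrative, not the role's actual content):

```yaml
# Illustrative notify/handler pattern behind the log output above.
tasks:
  - name: Copy daemon.json configuration file
    ansible.builtin.template:
      src: daemon.json.j2
      dest: /etc/docker/daemon.json
    notify: Restart docker service

  # Run any pending handlers now instead of waiting for the end of the play.
  - name: Flush handlers
    ansible.builtin.meta: flush_handlers

handlers:
  - name: Restart docker service
    ansible.builtin.service:
      name: docker
      state: restarted
```

Because testbed-manager's daemon.json and related files were already current, no notification was raised there and the restart handler had nothing to do.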
2025-05-25 00:38:11.198062 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:11.198406 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:11.199224 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:11.201814 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:11.202094 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:11.202369 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:11.202790 | orchestrator |
2025-05-25 00:38:11.202994 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-25 00:38:11.203367 | orchestrator | Sunday 25 May 2025 00:38:11 +0000 (0:00:02.069) 0:06:43.418 ************
2025-05-25 00:38:11.302847 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:11.303157 | orchestrator |
2025-05-25 00:38:11.304125 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-25 00:38:11.304793 | orchestrator | Sunday 25 May 2025 00:38:11 +0000 (0:00:00.103) 0:06:43.522 ************
2025-05-25 00:38:12.322917 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:12.323147 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:12.325018 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:12.325473 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:12.326570 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:12.327528 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:12.328146 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:12.329032 | orchestrator |
2025-05-25 00:38:12.329458 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-25 00:38:12.329929 | orchestrator | Sunday 25 May 2025 00:38:12 +0000 (0:00:01.019) 0:06:44.541 ************
2025-05-25 00:38:12.453269 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:38:12.514893 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:12.583367 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:38:12.645126 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:38:12.708365 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:38:12.986402 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:38:12.987673 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:38:12.994084 | orchestrator |
2025-05-25 00:38:12.994121 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-25 00:38:12.994135 | orchestrator | Sunday 25 May 2025 00:38:12 +0000 (0:00:00.664) 0:06:45.205 ************
2025-05-25 00:38:13.850690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:38:13.854439 | orchestrator |
2025-05-25 00:38:13.854481 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-25 00:38:13.854496 | orchestrator | Sunday 25 May 2025 00:38:13 +0000 (0:00:00.862) 0:06:46.068 ************
2025-05-25 00:38:14.679826 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:14.680278 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:14.680372 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:14.680389 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:14.680662 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:14.680992 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:14.681454 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:14.684596 | orchestrator |
2025-05-25 00:38:14.684687 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-25 00:38:14.684704 | orchestrator | Sunday 25 May 2025 00:38:14 +0000 (0:00:00.829) 0:06:46.897 ************
2025-05-25 00:38:17.249073 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-25 00:38:17.249818 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-25 00:38:17.253071 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-25 00:38:17.253118 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-25 00:38:17.253131 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-25 00:38:17.253143 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-25 00:38:17.253645 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-25 00:38:17.255553 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-25 00:38:17.256181 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-25 00:38:17.257848 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-25 00:38:17.258304 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-25 00:38:17.258807 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-25 00:38:17.259259 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-25 00:38:17.260129 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-25 00:38:17.260579 | orchestrator |
2025-05-25 00:38:17.261105 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-25 00:38:17.261613 | orchestrator | Sunday 25 May 2025 00:38:17 +0000 (0:00:02.570) 0:06:49.468 ************
2025-05-25 00:38:17.398217 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:38:17.469099 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:17.541909 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:38:17.606815 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:38:17.671913 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:38:17.790166 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:38:17.790357 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:38:17.791505 | orchestrator |
2025-05-25 00:38:17.791757 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-25 00:38:17.792631 | orchestrator | Sunday 25 May 2025 00:38:17 +0000 (0:00:00.544) 0:06:50.012 ************
2025-05-25 00:38:18.588865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:38:18.588979 | orchestrator |
2025-05-25 00:38:18.588995 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-25 00:38:18.589008 | orchestrator | Sunday 25 May 2025 00:38:18 +0000 (0:00:00.789) 0:06:50.802 ************
2025-05-25 00:38:18.991019 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:19.401884 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:19.402167 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:19.402834 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:19.402937 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:19.403833 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:19.404471 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:19.404948 | orchestrator |
2025-05-25 00:38:19.406187 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-25 00:38:19.406602 | orchestrator | Sunday 25 May 2025 00:38:19 +0000 (0:00:01.026) 0:06:51.618 ************
2025-05-25 00:38:19.897586 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:19.963693 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:20.040987 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:20.426346 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:20.426483 | orchestrator | ok: [testbed-node-0]
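The docker_compose tasks in this stretch migrate hosts from the legacy standalone `docker-compose` binary to the `docker-compose-plugin` apt package: checksum the old binary, remove it if present, then install the plugin. A sketch of that sequence follows; the binary path and the simplified removal condition are assumptions for illustration, not the role's exact logic.

```yaml
# Sketch of the docker-compose migration seen in the log.
# /usr/local/bin/docker-compose is an assumed install location.
- name: Get checksum of docker-compose file
  ansible.builtin.stat:
    path: /usr/local/bin/docker-compose
    checksum_algorithm: sha256
  register: docker_compose_file

- name: Remove docker-compose binary
  ansible.builtin.file:
    path: /usr/local/bin/docker-compose
    state: absent
  when: docker_compose_file.stat.exists  # simplified; the role's condition may differ

- name: Install docker-compose-plugin package
  ansible.builtin.apt:
    name: docker-compose-plugin
    state: present
```

With the plugin variant, Compose is invoked as the `docker compose` subcommand rather than the standalone `docker-compose` executable, which is why the removal tasks above all report `skipping` on hosts where the old binary never existed.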
2025-05-25 00:38:20.426664 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:20.426915 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:20.427336 | orchestrator |
2025-05-25 00:38:20.427675 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-25 00:38:20.428019 | orchestrator | Sunday 25 May 2025 00:38:20 +0000 (0:00:01.026) 0:06:52.644 ************
2025-05-25 00:38:20.553671 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:38:20.616783 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:20.678667 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:38:20.745804 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:38:20.808555 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:38:20.895778 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:38:20.896247 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:38:20.897377 | orchestrator |
2025-05-25 00:38:20.900926 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-25 00:38:20.901158 | orchestrator | Sunday 25 May 2025 00:38:20 +0000 (0:00:00.471) 0:06:53.116 ************
2025-05-25 00:38:22.289704 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:22.290596 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:22.290780 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:22.291946 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:22.292679 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:22.293376 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:22.294132 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:22.294887 | orchestrator |
2025-05-25 00:38:22.295655 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-25 00:38:22.296848 | orchestrator | Sunday 25 May 2025 00:38:22 +0000 (0:00:01.394) 0:06:54.510 ************
2025-05-25 00:38:22.417178 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:38:22.484489 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:22.547224 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:38:22.609718 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:38:22.678295 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:38:22.773707 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:38:22.774115 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:38:22.775123 | orchestrator |
2025-05-25 00:38:22.776412 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-25 00:38:22.776717 | orchestrator | Sunday 25 May 2025 00:38:22 +0000 (0:00:00.483) 0:06:54.994 ************
2025-05-25 00:38:24.592212 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:24.592332 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:24.592348 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:24.592693 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:24.593199 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:24.594309 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:24.597235 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:24.597549 | orchestrator |
2025-05-25 00:38:24.598222 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-25 00:38:24.598507 | orchestrator | Sunday 25 May 2025 00:38:24 +0000 (0:00:01.815) 0:06:56.809 ************
2025-05-25 00:38:26.110400 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:26.110693 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:26.111310 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:26.112042 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:26.112996 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:26.113406 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:26.113907 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:26.114460 | orchestrator |
2025-05-25 00:38:26.114934 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-25 00:38:26.115492 | orchestrator | Sunday 25 May 2025 00:38:26 +0000 (0:00:01.521) 0:06:58.330 ************
2025-05-25 00:38:27.860731 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:27.860902 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:27.862360 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:27.863216 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:27.864526 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:27.865501 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:27.867138 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:27.867355 | orchestrator |
2025-05-25 00:38:27.868108 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-25 00:38:27.868866 | orchestrator | Sunday 25 May 2025 00:38:27 +0000 (0:00:01.745) 0:07:00.076 ************
2025-05-25 00:38:29.453214 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:29.453317 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:29.454160 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:29.457297 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:29.458185 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:29.458894 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:29.459570 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:29.460454 | orchestrator |
2025-05-25 00:38:29.460967 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-25 00:38:29.461634 | orchestrator | Sunday 25 May 2025 00:38:29 +0000 (0:00:01.595) 0:07:01.671 ************
2025-05-25 00:38:30.462865 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:30.465167 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:30.465246 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:30.465261 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:30.466120 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:30.466931 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:30.467683 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:30.467723 | orchestrator |
2025-05-25 00:38:30.468133 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-25 00:38:30.468462 | orchestrator | Sunday 25 May 2025 00:38:30 +0000 (0:00:01.010) 0:07:02.682 ************
2025-05-25 00:38:30.593651 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:38:30.655062 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:30.716514 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:38:30.793800 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:38:30.856259 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:38:31.253306 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:38:31.253962 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:38:31.254372 | orchestrator |
2025-05-25 00:38:31.254962 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-25 00:38:31.255416 | orchestrator | Sunday 25 May 2025 00:38:31 +0000 (0:00:00.789) 0:07:03.471 ************
2025-05-25 00:38:31.372816 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:38:31.449341 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:31.512301 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:38:31.572741 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:38:31.644261 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:38:31.742810 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:38:31.743376 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:38:31.744005 | orchestrator |
2025-05-25 00:38:31.748209 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-25 00:38:31.748238 | orchestrator | Sunday 25 May 2025 00:38:31 +0000 (0:00:00.491) 0:07:03.963 ************
2025-05-25 00:38:31.876204 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:31.940470 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:32.003578 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:32.088696 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:32.151378 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:32.260735 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:32.260936 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:32.261236 | orchestrator |
2025-05-25 00:38:32.261967 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-25 00:38:32.265241 | orchestrator | Sunday 25 May 2025 00:38:32 +0000 (0:00:00.719) 0:07:04.480 ************
2025-05-25 00:38:32.395661 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:32.460375 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:32.696641 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:32.760629 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:32.837057 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:32.980477 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:32.981079 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:32.981613 | orchestrator |
2025-05-25 00:38:32.982196 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-25 00:38:32.985981 | orchestrator | Sunday 25 May 2025 00:38:32 +0000 (0:00:00.534) 0:07:05.199 ************
2025-05-25 00:38:33.111331 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:33.184031 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:33.248973 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:33.311259 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:33.400052 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:33.515730 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:33.515786 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:33.519018 | orchestrator |
2025-05-25 00:38:33.519045 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-25 00:38:33.519617 | orchestrator | Sunday 25 May 2025 00:38:33 +0000 (0:00:00.534) 0:07:05.734 ************
2025-05-25 00:38:39.344510 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:39.344949 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:39.345957 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:39.347253 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:39.348091 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:39.348785 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:39.350097 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:39.350323 | orchestrator |
2025-05-25 00:38:39.351130 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-25 00:38:39.351745 | orchestrator | Sunday 25 May 2025 00:38:39 +0000 (0:00:05.829) 0:07:11.563 ************
2025-05-25 00:38:39.482632 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:38:39.558811 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:38:39.630124 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:38:39.697490 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:38:39.758916 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:38:39.877511 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:38:39.879569 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:38:39.882151 | orchestrator |
2025-05-25 00:38:39.883166 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-25 00:38:39.883869 | orchestrator | Sunday 25 May 2025 00:38:39 +0000 (0:00:00.535) 0:07:12.099 ************
2025-05-25 00:38:40.847662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:38:40.848005 | orchestrator |
2025-05-25 00:38:40.848608 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-25 00:38:40.849139 | orchestrator | Sunday 25 May 2025 00:38:40 +0000 (0:00:00.969) 0:07:13.068 ************
2025-05-25 00:38:42.568257 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:42.568362 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:42.568696 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:42.568861 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:42.569329 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:42.570359 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:42.570561 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:42.570929 | orchestrator |
2025-05-25 00:38:42.573536 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-25 00:38:42.574546 | orchestrator | Sunday 25 May 2025 00:38:42 +0000 (0:00:01.720) 0:07:14.789 ************
2025-05-25 00:38:43.709488 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:43.709608 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:43.709623 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:43.709705 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:43.710512 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:43.711277 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:43.711456 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:43.712056 | orchestrator |
2025-05-25 00:38:43.712558 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-25 00:38:43.712948 | orchestrator | Sunday 25 May 2025 00:38:43 +0000 (0:00:01.137) 0:07:15.926 ************
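The chrony role above installs the package, manages the service, checks for an existing configuration, and then templates `chrony.conf.j2` onto each host, notifying a restart handler. A hedged sketch of the configuration step follows; the destination path is the usual Debian-family location and is an assumption here, as is the handler wiring.

```yaml
# Sketch of the chrony configuration step seen in the log.
# /etc/chrony/chrony.conf is the typical Debian-family path (assumed).
- name: Copy configuration file
  ansible.builtin.template:
    src: chrony.conf.j2
    dest: /etc/chrony/chrony.conf
    owner: root
    group: root
    mode: "0644"
  notify: Restart chrony service

handlers:
  - name: Restart chrony service
    ansible.builtin.service:
      name: chrony
      state: restarted
```

Because the template renders differently than whatever was on disk, every host (including testbed-manager) reports `changed` for this task, and the corresponding restart handler fires on all hosts at the end of the play.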
2025-05-25 00:38:44.103960 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:44.544546 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:44.545269 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:44.546209 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:44.547180 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:44.548121 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:44.548639 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:44.549915 | orchestrator |
2025-05-25 00:38:44.551123 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-25 00:38:44.552856 | orchestrator | Sunday 25 May 2025 00:38:44 +0000 (0:00:00.838) 0:07:16.764 ************
2025-05-25 00:38:46.409063 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 00:38:46.410055 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 00:38:46.413214 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 00:38:46.413236 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 00:38:46.413243 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 00:38:46.413250 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 00:38:46.414397 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-25 00:38:46.415126 | orchestrator |
2025-05-25 00:38:46.416147 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-25 00:38:46.416392 | orchestrator | Sunday 25 May 2025 00:38:46 +0000 (0:00:01.863) 0:07:18.628 ************
2025-05-25 00:38:47.179139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:38:47.179280 | orchestrator |
2025-05-25 00:38:47.180171 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-25 00:38:47.180539 | orchestrator | Sunday 25 May 2025 00:38:47 +0000 (0:00:00.770) 0:07:19.398 ************
2025-05-25 00:38:55.686419 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:38:55.688090 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:38:55.688151 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:38:55.690366 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:38:55.690792 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:38:55.691602 | orchestrator | changed: [testbed-manager]
2025-05-25 00:38:55.691993 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:38:55.692776 | orchestrator |
2025-05-25 00:38:55.694453 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-25 00:38:55.694493 | orchestrator | Sunday 25 May 2025 00:38:55 +0000 (0:00:08.505) 0:07:27.903 ************
2025-05-25 00:38:57.631783 | orchestrator | ok: [testbed-manager]
2025-05-25 00:38:57.631897 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:57.633260 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:57.634689 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:57.635963 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:57.637322 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:57.638882 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:57.640292 | orchestrator |
2025-05-25 00:38:57.641380 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-25 00:38:57.643602 | orchestrator | Sunday 25 May 2025 00:38:57 +0000 (0:00:01.945) 0:07:29.848 ************
2025-05-25 00:38:58.891360 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:38:58.891897 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:38:58.893279 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:38:58.894349 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:38:58.896694 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:38:58.897623 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:38:58.899140 | orchestrator |
2025-05-25 00:38:58.900543 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-25 00:38:58.900840 | orchestrator | Sunday 25 May 2025 00:38:58 +0000 (0:00:01.259) 0:07:31.108 ************
2025-05-25 00:39:00.096685 | orchestrator | changed: [testbed-manager]
2025-05-25 00:39:00.096829 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:39:00.096914 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:39:00.097801 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:39:00.098705 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:39:00.099704 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:39:00.100579 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:39:00.101560 | orchestrator |
2025-05-25 00:39:00.101983 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-25 00:39:00.102832 | orchestrator |
2025-05-25 00:39:00.103513 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-25 00:39:00.103815 | orchestrator | Sunday 25 May 2025 00:39:00 +0000 (0:00:01.209) 0:07:32.317 ************
2025-05-25 00:39:00.418266 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:39:00.511413 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:39:00.581418 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:39:00.654127 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:39:00.716541 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:39:00.854545 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:39:00.855888 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:39:00.861863 | orchestrator |
2025-05-25 00:39:00.862563 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-25 00:39:00.863847 | orchestrator |
2025-05-25 00:39:00.863991 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-25 00:39:00.865068 | orchestrator | Sunday 25 May 2025 00:39:00 +0000 (0:00:00.756) 0:07:33.073 ************
2025-05-25 00:39:02.146009 | orchestrator | changed: [testbed-manager]
2025-05-25 00:39:02.146221 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:39:02.146238 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:39:02.146322 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:39:02.146894 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:39:02.147652 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:39:02.147687 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:39:02.148332 | orchestrator |
2025-05-25 00:39:02.149475 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-25 00:39:02.150168 | orchestrator | Sunday 25 May 2025 00:39:02 +0000 (0:00:01.289) 0:07:34.363 ************
2025-05-25 00:39:03.548922 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:03.551991 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:03.552041 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:03.553416 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:03.554104 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:03.555265 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:03.556091 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:03.557109 | orchestrator |
2025-05-25 00:39:03.557612 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-25 00:39:03.558492 | orchestrator | Sunday 25 May 2025 00:39:03 +0000 (0:00:01.404) 0:07:35.768 ************
2025-05-25 00:39:03.680064 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:39:03.745077 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:39:03.805862 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:39:04.021820 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:39:04.084614 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:39:04.486782 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:39:04.487330 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:39:04.488830 | orchestrator |
2025-05-25 00:39:04.489333 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-25 00:39:04.490289 | orchestrator | Sunday 25 May 2025 00:39:04 +0000 (0:00:00.930) 0:07:36.698 ************
2025-05-25 00:39:05.681410 | orchestrator | changed: [testbed-manager]
2025-05-25 00:39:05.681797 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:39:05.686132 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:39:05.687765 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:39:05.689122 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:39:05.690669 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:39:05.691009 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:39:05.692112 | orchestrator |
2025-05-25 00:39:05.693105 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-05-25 00:39:05.694766 | orchestrator |
2025-05-25 00:39:05.695557 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-05-25 00:39:05.696529 | orchestrator | Sunday 25 May 2025 00:39:05 +0000 (0:00:01.205) 0:07:37.904 ************
2025-05-25 00:39:06.508764 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:39:06.509681 | orchestrator |
2025-05-25 00:39:06.509948 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-25 00:39:06.511000 | orchestrator | Sunday 25 May 2025 00:39:06 +0000 (0:00:00.819) 0:07:38.724 ************
2025-05-25 00:39:06.974762 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:07.039993 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:07.122990 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:07.526890 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:07.527352 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:07.528347 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:07.529658 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:07.530977 | orchestrator |
2025-05-25 00:39:07.531414 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-25 00:39:07.532232 | orchestrator | Sunday 25 May 2025 00:39:07 +0000 (0:00:01.018) 0:07:39.743 ************
2025-05-25 00:39:08.672275 | orchestrator | changed: [testbed-manager]
2025-05-25 00:39:08.672493 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:39:08.673338 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:39:08.674106 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:39:08.675127 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:39:08.675486 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:39:08.678973 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:39:08.679021 | orchestrator |
2025-05-25 00:39:08.679035 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-05-25 00:39:08.679048 | orchestrator | Sunday 25 May 2025 00:39:08 +0000 (0:00:01.144) 0:07:40.887 ************
2025-05-25 00:39:09.512335 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:39:09.513658 | orchestrator |
2025-05-25 00:39:09.514397 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-25 00:39:09.515531 | orchestrator | Sunday 25 May 2025 00:39:09 +0000 (0:00:00.840) 0:07:41.728 ************
2025-05-25 00:39:09.964900 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:10.532414 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:10.532620 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:10.533537 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:10.534554 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:10.534916 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:10.535887 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:10.536095 | orchestrator |
2025-05-25 00:39:10.536714 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-25 00:39:10.537152 | orchestrator | Sunday 25 May 2025 00:39:10 +0000 (0:00:01.022) 0:07:42.750 ************
2025-05-25 00:39:10.933380 | orchestrator | changed: [testbed-manager]
2025-05-25 00:39:11.631171 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:39:11.631780 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:39:11.633307 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:39:11.634234 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:39:11.637091 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:39:11.637161 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:39:11.637177 | orchestrator |
2025-05-25 00:39:11.637191 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:39:11.637892 | orchestrator | 2025-05-25 00:39:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:39:11.637921 | orchestrator | 2025-05-25 00:39:11 | INFO  | Please wait and do not abort execution.
2025-05-25 00:39:11.638315 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-25 00:39:11.638990 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 00:39:11.639462 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 00:39:11.640473 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 00:39:11.640586 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-25 00:39:11.641027 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 00:39:11.641627 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-25 00:39:11.641942 | orchestrator |
2025-05-25 00:39:11.642462 | orchestrator | Sunday 25 May 2025 00:39:11 +0000 (0:00:01.098) 0:07:43.848 ************
2025-05-25 00:39:11.642805 | orchestrator | ===============================================================================
2025-05-25 00:39:11.643602 | orchestrator | osism.commons.packages : Install required packages --------------------- 82.45s
2025-05-25 00:39:11.643940 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.97s
2025-05-25 00:39:11.644345 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.12s
2025-05-25 00:39:11.644825 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.94s
2025-05-25 00:39:11.645279 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.13s
2025-05-25 00:39:11.645657 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 11.80s
2025-05-25 00:39:11.646159 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.73s
2025-05-25 00:39:11.646573 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.25s
2025-05-25 00:39:11.647080 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.39s
2025-05-25 00:39:11.647524 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.51s
2025-05-25 00:39:11.648776 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.15s
2025-05-25 00:39:11.649032 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.93s
2025-05-25 00:39:11.649462 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.53s
2025-05-25 00:39:11.649885 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.38s
2025-05-25 00:39:11.650256 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.31s
2025-05-25 00:39:11.650807 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.28s
2025-05-25 00:39:11.650829 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.92s
2025-05-25 00:39:11.651135 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.89s
2025-05-25 00:39:11.651744 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.83s
2025-05-25 00:39:11.651899 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.81s
2025-05-25 00:39:12.218260 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-25 00:39:12.218358 | orchestrator | + osism apply network
2025-05-25 00:39:14.026660 | orchestrator | 2025-05-25 00:39:14 | INFO  | Task e729784a-91d2-4950-a52f-fe593e8d66d9 (network) was prepared for execution.
2025-05-25 00:39:14.026819 | orchestrator | 2025-05-25 00:39:14 | INFO  | It takes a moment until task e729784a-91d2-4950-a52f-fe593e8d66d9 (network) has been started and output is visible here.
2025-05-25 00:39:17.335316 | orchestrator |
2025-05-25 00:39:17.336227 | orchestrator | PLAY [Apply role network] ******************************************************
2025-05-25 00:39:17.338270 | orchestrator |
2025-05-25 00:39:17.339272 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-05-25 00:39:17.341790 | orchestrator | Sunday 25 May 2025 00:39:17 +0000 (0:00:00.199) 0:00:00.199 ************
2025-05-25 00:39:17.479768 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:17.553161 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:17.626964 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:17.701218 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:17.776974 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:18.011923 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:18.012945 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:18.013144 | orchestrator |
2025-05-25 00:39:18.014316 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-05-25 00:39:18.015236 | orchestrator | Sunday 25 May 2025 00:39:18 +0000 (0:00:00.677) 0:00:00.876 ************
2025-05-25 00:39:19.220936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:39:19.221540 | orchestrator |
2025-05-25 00:39:19.222173 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-05-25 00:39:19.223162 | orchestrator | Sunday 25 May 2025 00:39:19 +0000 (0:00:01.207) 0:00:02.083 ************
2025-05-25 00:39:21.060866 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:21.061384 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:21.062960 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:21.065070 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:21.066191 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:21.067072 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:21.067815 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:21.068933 | orchestrator |
2025-05-25 00:39:21.069927 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-05-25 00:39:21.069958 | orchestrator | Sunday 25 May 2025 00:39:21 +0000 (0:00:01.837) 0:00:03.921 ************
2025-05-25 00:39:22.693896 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:22.694879 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:22.696117 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:22.697318 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:22.698874 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:22.699925 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:22.700806 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:22.701759 | orchestrator |
2025-05-25 00:39:22.702271 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-05-25 00:39:22.703081 | orchestrator | Sunday 25 May 2025 00:39:22 +0000 (0:00:01.634) 0:00:05.555 ************
2025-05-25 00:39:23.162609 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-05-25 00:39:23.747941 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-05-25 00:39:23.748413 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-05-25 00:39:23.748897 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-05-25 00:39:23.749448 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-05-25 00:39:23.750194 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-05-25 00:39:23.750597 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-05-25 00:39:23.751042 | orchestrator |
2025-05-25 00:39:23.751444 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-05-25 00:39:23.751883 | orchestrator | Sunday 25 May 2025 00:39:23 +0000 (0:00:01.056) 0:00:06.612 ************
2025-05-25 00:39:25.391063 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 00:39:25.391198 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-25 00:39:25.395132 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 00:39:25.395172 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-25 00:39:25.395185 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-25 00:39:25.395196 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-25 00:39:25.395207 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-25 00:39:25.395219 | orchestrator |
2025-05-25 00:39:25.396117 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-05-25 00:39:25.396581 | orchestrator | Sunday 25 May 2025 00:39:25 +0000 (0:00:01.643) 0:00:08.255 ************
2025-05-25 00:39:27.020212 | orchestrator | changed: [testbed-manager]
2025-05-25 00:39:27.020319 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:39:27.020567 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:39:27.021654 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:39:27.023022 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:39:27.023743 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:39:27.025024 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:39:27.025999 | orchestrator |
2025-05-25 00:39:27.026967 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-05-25 00:39:27.027926 | orchestrator | Sunday 25 May 2025 00:39:27 +0000 (0:00:01.623) 0:00:09.879 ************
2025-05-25 00:39:27.558471 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 00:39:27.993287 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 00:39:27.994344 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-25 00:39:27.994723 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-25 00:39:27.996046 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-25 00:39:27.998981 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-25 00:39:27.999029 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-25 00:39:27.999051 | orchestrator |
2025-05-25 00:39:27.999071 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-05-25 00:39:27.999090 | orchestrator | Sunday 25 May 2025 00:39:27 +0000 (0:00:00.981) 0:00:10.860 ************
2025-05-25 00:39:28.418243 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:28.502253 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:29.089083 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:29.089563 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:29.090625 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:29.093988 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:29.094011 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:29.094064 | orchestrator |
2025-05-25 00:39:29.094076 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-05-25 00:39:29.094088 | orchestrator | Sunday 25 May 2025 00:39:29 +0000 (0:00:01.090) 0:00:11.951 ************
2025-05-25 00:39:29.261721 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:39:29.341142 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:39:29.421357 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:39:29.497505 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:39:29.573043 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:39:29.859055 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:39:29.859632 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:39:29.860170 | orchestrator |
2025-05-25 00:39:29.860696 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-05-25 00:39:29.861415 | orchestrator | Sunday 25 May 2025 00:39:29 +0000 (0:00:00.772) 0:00:12.723 ************
2025-05-25 00:39:31.751732 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:31.752725 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:31.756001 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:31.756028 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:31.756040 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:31.756675 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:31.757712 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:31.759957 | orchestrator |
2025-05-25 00:39:31.760003 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-05-25 00:39:31.760046 | orchestrator | Sunday 25 May 2025 00:39:31 +0000 (0:00:01.893) 0:00:14.617 ************
2025-05-25 00:39:33.537556 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-05-25 00:39:33.538794 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-25 00:39:33.539344 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-25 00:39:33.540588 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-25 00:39:33.541851 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-25 00:39:33.542695 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-25 00:39:33.543117 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-25 00:39:33.544206 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-25 00:39:33.544656 | orchestrator |
2025-05-25 00:39:33.545374 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-05-25 00:39:33.546159 | orchestrator | Sunday 25 May 2025 00:39:33 +0000 (0:00:01.781) 0:00:16.398 ************
2025-05-25 00:39:35.070855 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:35.072958 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:39:35.073169 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:39:35.074119 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:39:35.075169 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:39:35.076662 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:39:35.077895 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:39:35.079137 | orchestrator |
2025-05-25 00:39:35.080284 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-05-25 00:39:35.081019 | orchestrator | Sunday 25 May 2025 00:39:35 +0000 (0:00:01.536) 0:00:17.935 ************
2025-05-25 00:39:36.454523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:39:36.454908 | orchestrator |
2025-05-25 00:39:36.454939 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-05-25 00:39:36.455633 | orchestrator | Sunday 25 May 2025 00:39:36 +0000 (0:00:01.380) 0:00:19.315 ************
2025-05-25 00:39:37.014159 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:37.430241 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:37.430909 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:37.434273 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:37.435297 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:37.436221 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:37.437067 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:37.437800 | orchestrator |
2025-05-25 00:39:37.438717 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-05-25 00:39:37.439179 | orchestrator | Sunday 25 May 2025 00:39:37 +0000 (0:00:00.979) 0:00:20.295 ************
2025-05-25 00:39:37.583312 | orchestrator | ok: [testbed-manager]
2025-05-25 00:39:37.663990 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:39:37.907698 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:39:38.001398 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:39:38.085767 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:39:38.229873 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:39:38.230228 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:39:38.230853 | orchestrator |
2025-05-25 00:39:38.232166 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-05-25 00:39:38.232297 | orchestrator | Sunday 25 May 2025 00:39:38 +0000 (0:00:00.795) 0:00:21.091 ************
2025-05-25 00:39:38.617551 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-25 00:39:38.617840 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-05-25 00:39:38.786402 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-25 00:39:38.786608 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-05-25 00:39:39.341246 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-25 00:39:39.341631 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-05-25 00:39:39.342310 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-25 00:39:39.343962 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-05-25 00:39:39.345232 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-25 00:39:39.346508 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-05-25 00:39:39.347405 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-25 00:39:39.348810 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-05-25 00:39:39.348842 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-25 00:39:39.349593 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-05-25 00:39:39.350265 | orchestrator |
2025-05-25 00:39:39.350951 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-05-25 00:39:39.351494 | orchestrator | Sunday 25 May 2025 00:39:39 +0000 (0:00:01.114) 0:00:22.206 ************
2025-05-25 00:39:39.661850 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:39:39.745033 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:39:39.829731 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:39:39.913337 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:39:39.999783 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:39:41.123395 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:39:41.123536 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:39:41.124212 | orchestrator |
2025-05-25 00:39:41.125749 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-05-25 00:39:41.127724 | orchestrator | Sunday 25 May 2025 00:39:41 +0000 (0:00:01.779) 0:00:23.986 ************
2025-05-25 00:39:41.283093 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:39:41.359890 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:39:41.597502 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:39:41.675084 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:39:41.754781 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:39:41.792166 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:39:41.792521 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:39:41.792796 | orchestrator |
2025-05-25 00:39:41.796254 | orchestrator | 2025-05-25 00:39:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:39:41.796289 | orchestrator | 2025-05-25 00:39:41 | INFO  | Please wait and do not abort execution.
2025-05-25 00:39:41.796355 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:39:41.797324 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:39:41.797840 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:39:41.798281 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:39:41.798648 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:39:41.800110 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:39:41.801043 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:39:41.802457 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:39:41.802490 | orchestrator | 2025-05-25 00:39:41.802511 | orchestrator | Sunday 25 May 2025 00:39:41 +0000 (0:00:00.673) 0:00:24.659 ************ 2025-05-25 00:39:41.802598 | orchestrator | =============================================================================== 2025-05-25 00:39:41.802820 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.89s 2025-05-25 00:39:41.803371 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.84s 2025-05-25 00:39:41.803680 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.78s 2025-05-25 00:39:41.803944 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.78s 2025-05-25 00:39:41.804067 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.64s 2025-05-25 00:39:41.804373 | 
orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s 2025-05-25 00:39:41.804676 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.62s 2025-05-25 00:39:41.804852 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.54s 2025-05-25 00:39:41.808933 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.38s 2025-05-25 00:39:41.808966 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.21s 2025-05-25 00:39:41.809051 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.11s 2025-05-25 00:39:41.810930 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s 2025-05-25 00:39:41.811245 | orchestrator | osism.commons.network : Create required directories --------------------- 1.06s 2025-05-25 00:39:41.811670 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.98s 2025-05-25 00:39:41.811765 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s 2025-05-25 00:39:41.812343 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.80s 2025-05-25 00:39:41.812382 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.77s 2025-05-25 00:39:41.812551 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.68s 2025-05-25 00:39:41.812792 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.67s 2025-05-25 00:39:42.316265 | orchestrator | + osism apply wireguard 2025-05-25 00:39:43.708875 | orchestrator | 2025-05-25 00:39:43 | INFO  | Task 681b1e6d-7090-459e-809b-ffdb2424df02 (wireguard) was prepared for execution. 
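The network role above prunes the cloud-init netplan file and manages its own configuration under names like /etc/netplan/01-osism.yaml. As a rough sketch, such a netplan file could look like the following (the interface name and addresses are assumptions for illustration, not values taken from this run):

```yaml
# Hypothetical /etc/netplan/01-osism.yaml -- interface name (ens3)
# and addresses are assumed placeholder values.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
      routes:
        - to: default
          via: 192.168.16.1
      nameservers:
        addresses: [8.8.8.8]
```

The "Remove unused configuration files" and "Copy netplan configuration" tasks in the timing summary correspond to replacing the cloud-init defaults with files of this shape.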
2025-05-25 00:39:43.708982 | orchestrator | 2025-05-25 00:39:43 | INFO  | It takes a moment until task 681b1e6d-7090-459e-809b-ffdb2424df02 (wireguard) has been started and output is visible here. 2025-05-25 00:39:46.774587 | orchestrator | 2025-05-25 00:39:46.774930 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-25 00:39:46.775400 | orchestrator | 2025-05-25 00:39:46.777122 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-25 00:39:46.778463 | orchestrator | Sunday 25 May 2025 00:39:46 +0000 (0:00:00.162) 0:00:00.162 ************ 2025-05-25 00:39:48.201542 | orchestrator | ok: [testbed-manager] 2025-05-25 00:39:48.201678 | orchestrator | 2025-05-25 00:39:48.202043 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-25 00:39:48.203377 | orchestrator | Sunday 25 May 2025 00:39:48 +0000 (0:00:01.430) 0:00:01.593 ************ 2025-05-25 00:39:54.370172 | orchestrator | changed: [testbed-manager] 2025-05-25 00:39:54.370355 | orchestrator | 2025-05-25 00:39:54.371068 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-25 00:39:54.372729 | orchestrator | Sunday 25 May 2025 00:39:54 +0000 (0:00:06.166) 0:00:07.760 ************ 2025-05-25 00:39:54.892355 | orchestrator | changed: [testbed-manager] 2025-05-25 00:39:54.892851 | orchestrator | 2025-05-25 00:39:54.894364 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-25 00:39:54.896113 | orchestrator | Sunday 25 May 2025 00:39:54 +0000 (0:00:00.525) 0:00:08.285 ************ 2025-05-25 00:39:55.304802 | orchestrator | changed: [testbed-manager] 2025-05-25 00:39:55.304965 | orchestrator | 2025-05-25 00:39:55.305939 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-25 00:39:55.306629 | orchestrator 
| Sunday 25 May 2025 00:39:55 +0000 (0:00:00.411) 0:00:08.697 ************ 2025-05-25 00:39:55.800975 | orchestrator | ok: [testbed-manager] 2025-05-25 00:39:55.801165 | orchestrator | 2025-05-25 00:39:55.801908 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-25 00:39:55.803535 | orchestrator | Sunday 25 May 2025 00:39:55 +0000 (0:00:00.496) 0:00:09.193 ************ 2025-05-25 00:39:56.334202 | orchestrator | ok: [testbed-manager] 2025-05-25 00:39:56.334976 | orchestrator | 2025-05-25 00:39:56.335052 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-25 00:39:56.336105 | orchestrator | Sunday 25 May 2025 00:39:56 +0000 (0:00:00.534) 0:00:09.727 ************ 2025-05-25 00:39:56.791981 | orchestrator | ok: [testbed-manager] 2025-05-25 00:39:56.793555 | orchestrator | 2025-05-25 00:39:56.793590 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-25 00:39:56.793603 | orchestrator | Sunday 25 May 2025 00:39:56 +0000 (0:00:00.455) 0:00:10.183 ************ 2025-05-25 00:39:57.977561 | orchestrator | changed: [testbed-manager] 2025-05-25 00:39:57.978386 | orchestrator | 2025-05-25 00:39:57.978716 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-25 00:39:57.979501 | orchestrator | Sunday 25 May 2025 00:39:57 +0000 (0:00:01.184) 0:00:11.367 ************ 2025-05-25 00:39:58.878491 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-25 00:39:58.880380 | orchestrator | changed: [testbed-manager] 2025-05-25 00:39:58.880412 | orchestrator | 2025-05-25 00:39:58.880773 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-25 00:39:58.881715 | orchestrator | Sunday 25 May 2025 00:39:58 +0000 (0:00:00.901) 0:00:12.269 ************ 2025-05-25 00:40:00.573243 | orchestrator | changed: 
[testbed-manager] 2025-05-25 00:40:00.574100 | orchestrator | 2025-05-25 00:40:00.575834 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-25 00:40:00.576724 | orchestrator | Sunday 25 May 2025 00:40:00 +0000 (0:00:01.694) 0:00:13.964 ************ 2025-05-25 00:40:01.500689 | orchestrator | changed: [testbed-manager] 2025-05-25 00:40:01.500824 | orchestrator | 2025-05-25 00:40:01.501167 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:40:01.501813 | orchestrator | 2025-05-25 00:40:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:40:01.501837 | orchestrator | 2025-05-25 00:40:01 | INFO  | Please wait and do not abort execution. 2025-05-25 00:40:01.502149 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 00:40:01.502920 | orchestrator | 2025-05-25 00:40:01.503601 | orchestrator | Sunday 25 May 2025 00:40:01 +0000 (0:00:00.929) 0:00:14.893 ************ 2025-05-25 00:40:01.505767 | orchestrator | =============================================================================== 2025-05-25 00:40:01.506404 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.17s 2025-05-25 00:40:01.506939 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-05-25 00:40:01.507155 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.43s 2025-05-25 00:40:01.507787 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2025-05-25 00:40:01.508119 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2025-05-25 00:40:01.508495 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s 2025-05-25 
00:40:01.509013 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-05-25 00:40:01.509458 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2025-05-25 00:40:01.510294 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s 2025-05-25 00:40:01.511056 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s 2025-05-25 00:40:01.511818 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-05-25 00:40:01.992571 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-25 00:40:02.030726 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-25 00:40:02.030760 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-25 00:40:02.106176 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 185 0 --:--:-- --:--:-- --:--:-- 186 2025-05-25 00:40:02.120598 | orchestrator | + osism apply --environment custom workarounds 2025-05-25 00:40:03.494329 | orchestrator | 2025-05-25 00:40:03 | INFO  | Trying to run play workarounds in environment custom 2025-05-25 00:40:03.539908 | orchestrator | 2025-05-25 00:40:03 | INFO  | Task 28c4ac04-f33c-4fcb-922f-19affed20e1e (workarounds) was prepared for execution. 2025-05-25 00:40:03.540003 | orchestrator | 2025-05-25 00:40:03 | INFO  | It takes a moment until task 28c4ac04-f33c-4fcb-922f-19affed20e1e (workarounds) has been started and output is visible here. 
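The wireguard role above generates the server key pair and a preshared key, then renders wg0.conf and client configuration files. A wg-quick server config of roughly this shape could result; all keys, addresses, the port, and the interface name are placeholders, not values from this deployment:

```ini
# Hypothetical /etc/wireguard/wg0.conf -- keys, addresses, and
# interface name are placeholder values.
[Interface]
Address    = 192.168.48.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
# NAT VPN client traffic out of the external interface (name assumed);
# this is why the role installs the iptables package first.
PostUp   = iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE

[Peer]
PublicKey    = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs   = 192.168.48.2/32
```

The final "Manage wg-quick@wg0.service service" task then enables and starts the interface via the wg-quick systemd template unit.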
2025-05-25 00:40:06.594373 | orchestrator | 2025-05-25 00:40:06.597009 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 00:40:06.597042 | orchestrator | 2025-05-25 00:40:06.597055 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-25 00:40:06.597068 | orchestrator | Sunday 25 May 2025 00:40:06 +0000 (0:00:00.168) 0:00:00.168 ************ 2025-05-25 00:40:06.776749 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-25 00:40:06.858338 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-25 00:40:06.937416 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-25 00:40:07.018146 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-25 00:40:07.099913 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-25 00:40:07.359271 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-25 00:40:07.359375 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-25 00:40:07.361619 | orchestrator | 2025-05-25 00:40:07.361660 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-25 00:40:07.363177 | orchestrator | 2025-05-25 00:40:07.363238 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-25 00:40:07.364471 | orchestrator | Sunday 25 May 2025 00:40:07 +0000 (0:00:00.766) 0:00:00.935 ************ 2025-05-25 00:40:09.940783 | orchestrator | ok: [testbed-manager] 2025-05-25 00:40:09.942898 | orchestrator | 2025-05-25 00:40:09.946075 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-25 00:40:09.946144 | orchestrator | 2025-05-25 00:40:09.946165 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-05-25 00:40:09.946184 | orchestrator | Sunday 25 May 2025 00:40:09 +0000 (0:00:02.579) 0:00:03.514 ************ 2025-05-25 00:40:11.744044 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:40:11.744220 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:40:11.744933 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:40:11.745604 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:40:11.745692 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:40:11.746003 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:40:11.746952 | orchestrator | 2025-05-25 00:40:11.747066 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-25 00:40:11.747487 | orchestrator | 2025-05-25 00:40:11.748202 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-25 00:40:11.748224 | orchestrator | Sunday 25 May 2025 00:40:11 +0000 (0:00:01.804) 0:00:05.319 ************ 2025-05-25 00:40:13.201503 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 00:40:13.202561 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 00:40:13.204587 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 00:40:13.206107 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 00:40:13.207513 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 00:40:13.208090 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-25 00:40:13.208908 | orchestrator | 2025-05-25 00:40:13.209609 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-05-25 00:40:13.210699 | orchestrator | Sunday 25 May 2025 00:40:13 +0000 (0:00:01.458) 0:00:06.777 ************ 2025-05-25 00:40:17.029986 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:40:17.030480 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:40:17.032042 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:40:17.033243 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:40:17.033838 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:40:17.034901 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:40:17.035483 | orchestrator | 2025-05-25 00:40:17.036495 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-25 00:40:17.037007 | orchestrator | Sunday 25 May 2025 00:40:17 +0000 (0:00:03.832) 0:00:10.609 ************ 2025-05-25 00:40:17.174136 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:40:17.248255 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:40:17.325619 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:40:17.547059 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:40:17.678889 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:40:17.679055 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:40:17.683370 | orchestrator | 2025-05-25 00:40:17.683629 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-25 00:40:17.685051 | orchestrator | 2025-05-25 00:40:17.685694 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-25 00:40:17.686117 | orchestrator | Sunday 25 May 2025 00:40:17 +0000 (0:00:00.647) 0:00:11.257 ************ 2025-05-25 00:40:19.323006 | orchestrator | changed: [testbed-manager] 2025-05-25 00:40:19.323110 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:40:19.323180 | orchestrator | changed: [testbed-node-4] 2025-05-25 
00:40:19.324261 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:40:19.324824 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:40:19.326396 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:40:19.327898 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:40:19.329223 | orchestrator | 2025-05-25 00:40:19.330268 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-25 00:40:19.332057 | orchestrator | Sunday 25 May 2025 00:40:19 +0000 (0:00:01.644) 0:00:12.902 ************ 2025-05-25 00:40:20.953914 | orchestrator | changed: [testbed-manager] 2025-05-25 00:40:20.954056 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:40:20.957445 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:40:20.957467 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:40:20.957474 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:40:20.957889 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:40:20.958964 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:40:20.960899 | orchestrator | 2025-05-25 00:40:20.961333 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-25 00:40:20.962118 | orchestrator | Sunday 25 May 2025 00:40:20 +0000 (0:00:01.627) 0:00:14.530 ************ 2025-05-25 00:40:22.414925 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:40:22.415368 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:40:22.417861 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:40:22.417901 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:40:22.417913 | orchestrator | ok: [testbed-manager] 2025-05-25 00:40:22.419234 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:40:22.419925 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:40:22.420552 | orchestrator | 2025-05-25 00:40:22.421082 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-25 00:40:22.421737 | orchestrator 
| Sunday 25 May 2025 00:40:22 +0000 (0:00:01.464) 0:00:15.994 ************ 2025-05-25 00:40:24.118175 | orchestrator | changed: [testbed-manager] 2025-05-25 00:40:24.118846 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:40:24.121253 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:40:24.122102 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:40:24.123214 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:40:24.124049 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:40:24.124413 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:40:24.125253 | orchestrator | 2025-05-25 00:40:24.125836 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-25 00:40:24.126628 | orchestrator | Sunday 25 May 2025 00:40:24 +0000 (0:00:01.702) 0:00:17.697 ************ 2025-05-25 00:40:24.266306 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:40:24.340658 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:40:24.415641 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:40:24.489265 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:40:24.716233 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:40:24.854292 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:40:24.854732 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:40:24.855632 | orchestrator | 2025-05-25 00:40:24.856575 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-25 00:40:24.859398 | orchestrator | 2025-05-25 00:40:24.859437 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-25 00:40:24.859445 | orchestrator | Sunday 25 May 2025 00:40:24 +0000 (0:00:00.736) 0:00:18.434 ************ 2025-05-25 00:40:27.301966 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:40:27.303165 | orchestrator | ok: [testbed-manager] 2025-05-25 00:40:27.304291 | orchestrator | ok: 
[testbed-node-1] 2025-05-25 00:40:27.305222 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:40:27.306207 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:40:27.307145 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:40:27.307979 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:40:27.308975 | orchestrator | 2025-05-25 00:40:27.309718 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:40:27.310101 | orchestrator | 2025-05-25 00:40:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:40:27.310234 | orchestrator | 2025-05-25 00:40:27 | INFO  | Please wait and do not abort execution. 2025-05-25 00:40:27.311240 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:40:27.311986 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:40:27.312035 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:40:27.312529 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:40:27.312787 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:40:27.313015 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:40:27.313331 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:40:27.313743 | orchestrator | 2025-05-25 00:40:27.314213 | orchestrator | Sunday 25 May 2025 00:40:27 +0000 (0:00:02.445) 0:00:20.879 ************ 2025-05-25 00:40:27.314574 | orchestrator | =============================================================================== 2025-05-25 00:40:27.315639 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.83s 2025-05-25 00:40:27.315718 | orchestrator | Apply netplan configuration --------------------------------------------- 2.58s 2025-05-25 00:40:27.316125 | orchestrator | Install python3-docker -------------------------------------------------- 2.45s 2025-05-25 00:40:27.316146 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-05-25 00:40:27.316323 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.70s 2025-05-25 00:40:27.316703 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-05-25 00:40:27.316871 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2025-05-25 00:40:27.317173 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.46s 2025-05-25 00:40:27.317517 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 2025-05-25 00:40:27.317861 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-05-25 00:40:27.318105 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.74s 2025-05-25 00:40:27.318452 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2025-05-25 00:40:27.826150 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-25 00:40:29.248804 | orchestrator | 2025-05-25 00:40:29 | INFO  | Task 4511ae2d-abd9-4be8-bd85-fe2ae21e1f30 (reboot) was prepared for execution. 2025-05-25 00:40:29.248895 | orchestrator | 2025-05-25 00:40:29 | INFO  | It takes a moment until task 4511ae2d-abd9-4be8-bd85-fe2ae21e1f30 (reboot) has been started and output is visible here. 
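The workarounds play copies a workarounds.sh script and a matching systemd unit to every node, reloads systemd, and enables the service. A oneshot unit of roughly this shape would match those steps (the script path and unit options are assumptions, not taken from this run):

```ini
# Hypothetical /etc/systemd/system/workarounds.service -- contents assumed.
[Unit]
Description=Apply local testbed workarounds at boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
```

A oneshot unit with RemainAfterExit fits the pattern seen here: the script runs once per boot, and systemd reports the service as active afterwards.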
2025-05-25 00:40:32.228114 | orchestrator | 2025-05-25 00:40:32.228818 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-25 00:40:32.230707 | orchestrator | 2025-05-25 00:40:32.231891 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-25 00:40:32.233026 | orchestrator | Sunday 25 May 2025 00:40:32 +0000 (0:00:00.142) 0:00:00.142 ************ 2025-05-25 00:40:32.319551 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:40:32.320349 | orchestrator | 2025-05-25 00:40:32.321051 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-25 00:40:32.322332 | orchestrator | Sunday 25 May 2025 00:40:32 +0000 (0:00:00.093) 0:00:00.235 ************ 2025-05-25 00:40:33.236412 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:40:33.237477 | orchestrator | 2025-05-25 00:40:33.237934 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-25 00:40:33.239892 | orchestrator | Sunday 25 May 2025 00:40:33 +0000 (0:00:00.914) 0:00:01.150 ************ 2025-05-25 00:40:33.348178 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:40:33.348566 | orchestrator | 2025-05-25 00:40:33.349633 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-25 00:40:33.350468 | orchestrator | 2025-05-25 00:40:33.351789 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-25 00:40:33.352053 | orchestrator | Sunday 25 May 2025 00:40:33 +0000 (0:00:00.114) 0:00:01.265 ************ 2025-05-25 00:40:33.440820 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:40:33.441390 | orchestrator | 2025-05-25 00:40:33.442109 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-25 00:40:33.442609 | orchestrator | Sunday 25 May 2025 
00:40:33 +0000 (0:00:00.092) 0:00:01.357 ************ 2025-05-25 00:40:34.086804 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:40:34.087613 | orchestrator | 2025-05-25 00:40:34.088611 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-25 00:40:34.089368 | orchestrator | Sunday 25 May 2025 00:40:34 +0000 (0:00:00.646) 0:00:02.003 ************ 2025-05-25 00:40:34.207522 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:40:34.207824 | orchestrator | 2025-05-25 00:40:34.208503 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-25 00:40:34.209512 | orchestrator | 2025-05-25 00:40:34.211479 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-25 00:40:34.212209 | orchestrator | Sunday 25 May 2025 00:40:34 +0000 (0:00:00.118) 0:00:02.122 ************ 2025-05-25 00:40:34.311660 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:40:34.313080 | orchestrator | 2025-05-25 00:40:34.313108 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-25 00:40:34.314572 | orchestrator | Sunday 25 May 2025 00:40:34 +0000 (0:00:00.105) 0:00:02.227 ************ 2025-05-25 00:40:35.063101 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:40:35.063633 | orchestrator | 2025-05-25 00:40:35.064000 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-25 00:40:35.064611 | orchestrator | Sunday 25 May 2025 00:40:35 +0000 (0:00:00.752) 0:00:02.979 ************ 2025-05-25 00:40:35.165682 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:40:35.166149 | orchestrator | 2025-05-25 00:40:35.166964 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-25 00:40:35.168782 | orchestrator | 2025-05-25 00:40:35.168805 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2025-05-25 00:40:35.169220 | orchestrator | Sunday 25 May 2025 00:40:35 +0000 (0:00:00.100) 0:00:03.080 ************ 2025-05-25 00:40:35.261213 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:40:35.262059 | orchestrator | 2025-05-25 00:40:35.263524 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-25 00:40:35.264194 | orchestrator | Sunday 25 May 2025 00:40:35 +0000 (0:00:00.097) 0:00:03.177 ************ 2025-05-25 00:40:35.926614 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:40:35.927265 | orchestrator | 2025-05-25 00:40:35.927946 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-25 00:40:35.928516 | orchestrator | Sunday 25 May 2025 00:40:35 +0000 (0:00:00.665) 0:00:03.842 ************ 2025-05-25 00:40:36.052760 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:40:36.053375 | orchestrator | 2025-05-25 00:40:36.055000 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-25 00:40:36.055506 | orchestrator | 2025-05-25 00:40:36.055835 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-25 00:40:36.056826 | orchestrator | Sunday 25 May 2025 00:40:36 +0000 (0:00:00.122) 0:00:03.965 ************ 2025-05-25 00:40:36.173827 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:40:36.174330 | orchestrator | 2025-05-25 00:40:36.174750 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-25 00:40:36.175223 | orchestrator | Sunday 25 May 2025 00:40:36 +0000 (0:00:00.120) 0:00:04.086 ************ 2025-05-25 00:40:36.829005 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:40:36.829132 | orchestrator | 2025-05-25 00:40:36.829153 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-05-25 00:40:36.829169 | orchestrator | Sunday 25 May 2025 00:40:36 +0000 (0:00:00.654) 0:00:04.740 ************ 2025-05-25 00:40:36.928985 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:40:36.929398 | orchestrator | 2025-05-25 00:40:36.930113 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-25 00:40:36.930441 | orchestrator | 2025-05-25 00:40:36.930988 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-25 00:40:36.932099 | orchestrator | Sunday 25 May 2025 00:40:36 +0000 (0:00:00.102) 0:00:04.843 ************ 2025-05-25 00:40:37.024644 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:40:37.026001 | orchestrator | 2025-05-25 00:40:37.026567 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-25 00:40:37.027477 | orchestrator | Sunday 25 May 2025 00:40:37 +0000 (0:00:00.097) 0:00:04.941 ************ 2025-05-25 00:40:37.674712 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:40:37.675134 | orchestrator | 2025-05-25 00:40:37.676233 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-25 00:40:37.676831 | orchestrator | Sunday 25 May 2025 00:40:37 +0000 (0:00:00.647) 0:00:05.589 ************ 2025-05-25 00:40:37.705802 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:40:37.705997 | orchestrator | 2025-05-25 00:40:37.706366 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:40:37.706647 | orchestrator | 2025-05-25 00:40:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:40:37.707056 | orchestrator | 2025-05-25 00:40:37 | INFO  | Please wait and do not abort execution. 
2025-05-25 00:40:37.707738 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:40:37.708182 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:40:37.709203 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:40:37.710157 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:40:37.710700 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:40:37.711079 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:40:37.711736 | orchestrator |
2025-05-25 00:40:37.712187 | orchestrator | Sunday 25 May 2025 00:40:37 +0000 (0:00:00.034) 0:00:05.623 ************
2025-05-25 00:40:37.712899 | orchestrator | ===============================================================================
2025-05-25 00:40:37.714175 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s
2025-05-25 00:40:37.714716 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s
2025-05-25 00:40:37.715363 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s
2025-05-25 00:40:38.200959 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-25 00:40:39.603554 | orchestrator | 2025-05-25 00:40:39 | INFO  | Task 85d1c46c-f86c-47ed-b6c6-0183623df092 (wait-for-connection) was prepared for execution.
2025-05-25 00:40:39.603648 | orchestrator | 2025-05-25 00:40:39 | INFO  | It takes a moment until task 85d1c46c-f86c-47ed-b6c6-0183623df092 (wait-for-connection) has been started and output is visible here.
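The reboot play above exits early unless `-e ireallymeanit=yes` is passed, which is why each "Exit playbook, if user did not mean to reboot systems" task shows `skipping:`. A minimal shell sketch of that confirmation-guard pattern (function name and messages are ours, not OSISM's):

```shell
# Hypothetical confirmation guard, mirroring the play's
# "Exit playbook, if user did not mean to reboot systems" task.
confirm_reboot() {
    # $1: value of the ireallymeanit extra variable
    if [ "$1" != "yes" ]; then
        echo "skipping reboot"
        return 1
    fi
    echo "rebooting"
}
```

With `ireallymeanit=yes` the guard is skipped and the reboot task runs, matching the `skipping:`/`changed:` pairs per node in the output above.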
2025-05-25 00:40:42.641379 | orchestrator |
2025-05-25 00:40:42.641685 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-25 00:40:42.645512 | orchestrator |
2025-05-25 00:40:42.645543 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-25 00:40:42.645571 | orchestrator | Sunday 25 May 2025 00:40:42 +0000 (0:00:00.169) 0:00:00.169 ************
2025-05-25 00:40:55.711805 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:40:55.711970 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:40:55.711988 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:40:55.712069 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:40:55.712776 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:40:55.713652 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:40:55.714672 | orchestrator |
2025-05-25 00:40:55.715718 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:40:55.715870 | orchestrator | 2025-05-25 00:40:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:40:55.715966 | orchestrator | 2025-05-25 00:40:55 | INFO  | Please wait and do not abort execution.
2025-05-25 00:40:55.717075 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:40:55.717560 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:40:55.718303 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:40:55.718767 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:40:55.719406 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:40:55.719724 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:40:55.720525 | orchestrator |
2025-05-25 00:40:55.720729 | orchestrator | Sunday 25 May 2025 00:40:55 +0000 (0:00:13.068) 0:00:13.237 ************
2025-05-25 00:40:55.721641 | orchestrator | ===============================================================================
2025-05-25 00:40:55.721889 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.07s
2025-05-25 00:40:56.247677 | orchestrator | + osism apply hddtemp
2025-05-25 00:40:57.641502 | orchestrator | 2025-05-25 00:40:57 | INFO  | Task f3743a42-e608-44d7-8e9b-573bdc115890 (hddtemp) was prepared for execution.
2025-05-25 00:40:57.641606 | orchestrator | 2025-05-25 00:40:57 | INFO  | It takes a moment until task f3743a42-e608-44d7-8e9b-573bdc115890 (hddtemp) has been started and output is visible here.
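The wait-for-connection play needed about 13 seconds for all six rebooted nodes to answer again. A rough shell analogue of what it verifies (the real play uses Ansible's connection plugin, not a raw TCP probe; function name and defaults are assumptions):

```shell
# Poll a TCP port until it answers or the timeout expires.
# Uses bash's /dev/tcp pseudo-device, so no dependency on nc.
wait_for_port() {
    local host="$1" port="$2" timeout="${3:-300}"
    local elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            echo "reachable"
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    echo "unreachable"
    return 1
}
```

For SSH reachability, port 22 would be the one to probe.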
2025-05-25 00:41:00.678185 | orchestrator | 2025-05-25 00:41:00.678291 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-25 00:41:00.682059 | orchestrator | 2025-05-25 00:41:00.682102 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-25 00:41:00.682116 | orchestrator | Sunday 25 May 2025 00:41:00 +0000 (0:00:00.195) 0:00:00.195 ************ 2025-05-25 00:41:00.820194 | orchestrator | ok: [testbed-manager] 2025-05-25 00:41:00.895057 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:41:00.969183 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:41:01.046525 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:41:01.125300 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:41:01.347543 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:41:01.348923 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:41:01.349199 | orchestrator | 2025-05-25 00:41:01.352764 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-25 00:41:01.352795 | orchestrator | Sunday 25 May 2025 00:41:01 +0000 (0:00:00.669) 0:00:00.865 ************ 2025-05-25 00:41:02.492609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:41:02.492745 | orchestrator | 2025-05-25 00:41:02.492761 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-25 00:41:02.492775 | orchestrator | Sunday 25 May 2025 00:41:02 +0000 (0:00:01.138) 0:00:02.003 ************ 2025-05-25 00:41:04.335707 | orchestrator | ok: [testbed-manager] 2025-05-25 00:41:04.336376 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:41:04.336746 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:41:04.337869 | 
orchestrator | ok: [testbed-node-3] 2025-05-25 00:41:04.340383 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:41:04.340444 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:41:04.340498 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:41:04.341014 | orchestrator | 2025-05-25 00:41:04.341555 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-25 00:41:04.342093 | orchestrator | Sunday 25 May 2025 00:41:04 +0000 (0:00:01.852) 0:00:03.855 ************ 2025-05-25 00:41:04.961080 | orchestrator | changed: [testbed-manager] 2025-05-25 00:41:05.048865 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:41:05.465695 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:41:05.465869 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:41:05.466682 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:41:05.467338 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:41:05.468716 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:41:05.468786 | orchestrator | 2025-05-25 00:41:05.468856 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-25 00:41:05.469186 | orchestrator | Sunday 25 May 2025 00:41:05 +0000 (0:00:01.126) 0:00:04.982 ************ 2025-05-25 00:41:06.730745 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:41:06.736083 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:41:06.737506 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:41:06.737927 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:41:06.738268 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:41:06.738634 | orchestrator | ok: [testbed-manager] 2025-05-25 00:41:06.740899 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:41:06.741375 | orchestrator | 2025-05-25 00:41:06.743748 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-25 00:41:06.743775 | orchestrator | Sunday 25 May 2025 00:41:06 +0000 
(0:00:01.259) 0:00:06.241 ************ 2025-05-25 00:41:07.032954 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:41:07.114539 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:41:07.197444 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:41:07.280192 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:41:07.400214 | orchestrator | changed: [testbed-manager] 2025-05-25 00:41:07.400304 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:41:07.400953 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:41:07.401355 | orchestrator | 2025-05-25 00:41:07.403174 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-25 00:41:07.403205 | orchestrator | Sunday 25 May 2025 00:41:07 +0000 (0:00:00.677) 0:00:06.919 ************ 2025-05-25 00:41:19.299396 | orchestrator | changed: [testbed-manager] 2025-05-25 00:41:19.299606 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:41:19.299624 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:41:19.299706 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:41:19.301350 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:41:19.302212 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:41:19.302977 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:41:19.303706 | orchestrator | 2025-05-25 00:41:19.304468 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-25 00:41:19.304964 | orchestrator | Sunday 25 May 2025 00:41:19 +0000 (0:00:11.890) 0:00:18.809 ************ 2025-05-25 00:41:20.450095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:41:20.450745 | orchestrator | 2025-05-25 00:41:20.452822 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-05-25 00:41:20.455977 | orchestrator | Sunday 25 May 2025 00:41:20 +0000 (0:00:01.156) 0:00:19.965 ************ 2025-05-25 00:41:22.239121 | orchestrator | changed: [testbed-manager] 2025-05-25 00:41:22.239980 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:41:22.241676 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:41:22.241873 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:41:22.243097 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:41:22.243491 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:41:22.244124 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:41:22.244973 | orchestrator | 2025-05-25 00:41:22.245672 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:41:22.246835 | orchestrator | 2025-05-25 00:41:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:41:22.246860 | orchestrator | 2025-05-25 00:41:22 | INFO  | Please wait and do not abort execution. 
2025-05-25 00:41:22.247589 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:41:22.248842 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:41:22.249835 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:41:22.251004 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:41:22.251373 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:41:22.252208 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:41:22.252918 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:41:22.253544 | orchestrator |
2025-05-25 00:41:22.253997 | orchestrator | Sunday 25 May 2025 00:41:22 +0000 (0:00:01.792) 0:00:21.757 ************
2025-05-25 00:41:22.254495 | orchestrator | ===============================================================================
2025-05-25 00:41:22.254825 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.89s
2025-05-25 00:41:22.255543 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.85s
2025-05-25 00:41:22.255917 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.79s
2025-05-25 00:41:22.256325 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.26s
2025-05-25 00:41:22.256634 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s
2025-05-25 00:41:22.256975 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s
2025-05-25 00:41:22.257371 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s
2025-05-25 00:41:22.257795 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.68s
2025-05-25 00:41:22.258081 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.67s
2025-05-25 00:41:22.772006 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-25 00:41:24.311928 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-25 00:41:24.312816 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-25 00:41:24.312898 | orchestrator | + local max_attempts=60
2025-05-25 00:41:24.312913 | orchestrator | + local name=ceph-ansible
2025-05-25 00:41:24.312924 | orchestrator | + local attempt_num=1
2025-05-25 00:41:24.312935 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-25 00:41:24.344705 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 00:41:24.344818 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-25 00:41:24.344857 | orchestrator | + local max_attempts=60
2025-05-25 00:41:24.344869 | orchestrator | + local name=kolla-ansible
2025-05-25 00:41:24.344881 | orchestrator | + local attempt_num=1
2025-05-25 00:41:24.344963 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-25 00:41:24.369951 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 00:41:24.370011 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-25 00:41:24.370139 | orchestrator | + local max_attempts=60
2025-05-25 00:41:24.370180 | orchestrator | + local name=osism-ansible
2025-05-25 00:41:24.370200 | orchestrator | + local attempt_num=1
2025-05-25 00:41:24.370300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-25 00:41:24.398833 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-25 00:41:24.398887 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-25 00:41:24.398900 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-25 00:41:24.568246 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-25 00:41:24.722793 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-25 00:41:24.880250 | orchestrator | ARA in osism-ansible already disabled.
2025-05-25 00:41:25.037100 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-25 00:41:25.037574 | orchestrator | + osism apply gather-facts
2025-05-25 00:41:26.491601 | orchestrator | 2025-05-25 00:41:26 | INFO  | Task abdcc498-30e8-4d70-996f-248e82bbce0b (gather-facts) was prepared for execution.
2025-05-25 00:41:26.491705 | orchestrator | 2025-05-25 00:41:26 | INFO  | It takes a moment until task abdcc498-30e8-4d70-996f-248e82bbce0b (gather-facts) has been started and output is visible here.
2025-05-25 00:41:29.502970 | orchestrator |
2025-05-25 00:41:29.503983 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 00:41:29.507580 | orchestrator |
2025-05-25 00:41:29.508787 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 00:41:29.508915 | orchestrator | Sunday 25 May 2025 00:41:29 +0000 (0:00:00.158) 0:00:00.158 ************
2025-05-25 00:41:34.199846 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:41:34.200452 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:41:34.202132 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:41:34.202781 | orchestrator | ok: [testbed-manager]
2025-05-25 00:41:34.205753 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:41:34.205780 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:41:34.205800 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:41:34.205815 | orchestrator |
2025-05-25 00:41:34.206077 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
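The xtrace above shows `wait_for_container_healthy` polling `docker inspect` for each manager container's health status. A plausible reconstruction of that helper; the retry delay and the `${DOCKER_CMD}` indirection (added so the function can be exercised without a Docker daemon) are our assumptions, since the trace only shows the already-healthy path:

```shell
# Wait until a container reports "healthy", retrying up to max_attempts times.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$("${DOCKER_CMD:-/usr/bin/docker}" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if ((attempt_num == max_attempts)); then
            echo "Container $name did not become healthy in time."
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed delay between polls; not visible in the trace
    done
}
```

In the log all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy, so each call returned after a single `docker inspect`.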
2025-05-25 00:41:34.207058 | orchestrator | 2025-05-25 00:41:34.207867 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-25 00:41:34.208297 | orchestrator | Sunday 25 May 2025 00:41:34 +0000 (0:00:04.700) 0:00:04.859 ************ 2025-05-25 00:41:34.348165 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:41:34.419078 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:41:34.494218 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:41:34.568925 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:41:34.641680 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:41:34.679852 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:41:34.680000 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:41:34.681120 | orchestrator | 2025-05-25 00:41:34.682275 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:41:34.682575 | orchestrator | 2025-05-25 00:41:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:41:34.682600 | orchestrator | 2025-05-25 00:41:34 | INFO  | Please wait and do not abort execution. 
2025-05-25 00:41:34.684442 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:41:34.684935 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:41:34.685363 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:41:34.685847 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:41:34.686362 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:41:34.687052 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:41:34.687513 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-25 00:41:34.687950 | orchestrator | 2025-05-25 00:41:34.688822 | orchestrator | Sunday 25 May 2025 00:41:34 +0000 (0:00:00.480) 0:00:05.339 ************ 2025-05-25 00:41:34.689954 | orchestrator | =============================================================================== 2025-05-25 00:41:34.690094 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.70s 2025-05-25 00:41:34.690312 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2025-05-25 00:41:35.191554 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-25 00:41:35.207826 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-25 00:41:35.218647 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-25 00:41:35.236643 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-25 00:41:35.256310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-25 00:41:35.274704 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-25 00:41:35.295321 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-25 00:41:35.314843 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-25 00:41:35.332756 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-25 00:41:35.350170 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-25 00:41:35.368654 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-25 00:41:35.387806 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-25 00:41:35.403163 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-25 00:41:35.422264 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-25 00:41:35.442089 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-25 00:41:35.463376 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-25 00:41:35.481934 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-25 00:41:35.501972 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-25 00:41:35.518094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-25 00:41:35.535802 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-25 00:41:35.557737 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-25 00:41:35.845040 | orchestrator | ok: Runtime: 0:25:04.662691 2025-05-25 00:41:35.947543 | 2025-05-25 00:41:35.947715 | TASK [Deploy services] 2025-05-25 00:41:36.480589 | orchestrator | skipping: Conditional result was False 2025-05-25 00:41:36.495838 | 2025-05-25 00:41:36.495980 | TASK [Deploy in a nutshell] 2025-05-25 00:41:37.168215 | orchestrator | + set -e 2025-05-25 00:41:37.168488 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-25 00:41:37.168533 | orchestrator | ++ export INTERACTIVE=false 2025-05-25 00:41:37.168568 | orchestrator | ++ INTERACTIVE=false 2025-05-25 00:41:37.168590 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-25 00:41:37.168614 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-25 00:41:37.168655 | orchestrator | + source /opt/manager-vars.sh 2025-05-25 00:41:37.168713 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-25 00:41:37.168743 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-25 00:41:37.168758 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-25 00:41:37.168774 | orchestrator | ++ CEPH_VERSION=reef 2025-05-25 00:41:37.168786 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-25 00:41:37.168805 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-25 00:41:37.168816 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-25 00:41:37.168837 | orchestrator | ++ 
MANAGER_VERSION=8.1.0 2025-05-25 00:41:37.168848 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-25 00:41:37.168862 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-25 00:41:37.168873 | orchestrator | ++ export ARA=false 2025-05-25 00:41:37.168885 | orchestrator | ++ ARA=false 2025-05-25 00:41:37.168896 | orchestrator | ++ export TEMPEST=false 2025-05-25 00:41:37.168912 | orchestrator | ++ TEMPEST=false 2025-05-25 00:41:37.168923 | orchestrator | ++ export IS_ZUUL=true 2025-05-25 00:41:37.168934 | orchestrator | ++ IS_ZUUL=true 2025-05-25 00:41:37.168945 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.93 2025-05-25 00:41:37.168956 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.93 2025-05-25 00:41:37.168967 | orchestrator | ++ export EXTERNAL_API=false 2025-05-25 00:41:37.168978 | orchestrator | ++ EXTERNAL_API=false 2025-05-25 00:41:37.168988 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-25 00:41:37.168999 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-25 00:41:37.169010 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-25 00:41:37.169021 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-25 00:41:37.169032 | orchestrator | 2025-05-25 00:41:37.169043 | orchestrator | # PULL IMAGES 2025-05-25 00:41:37.169055 | orchestrator | 2025-05-25 00:41:37.169065 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-25 00:41:37.169076 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-25 00:41:37.169087 | orchestrator | + echo 2025-05-25 00:41:37.169098 | orchestrator | + echo '# PULL IMAGES' 2025-05-25 00:41:37.169110 | orchestrator | + echo 2025-05-25 00:41:37.170262 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-25 00:41:37.231450 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-25 00:41:37.231548 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-25 00:41:38.631804 | orchestrator | 2025-05-25 00:41:38 | INFO  | Trying to run play pull-images in environment custom 2025-05-25 00:41:38.678218 | orchestrator | 
2025-05-25 00:41:38 | INFO  | Task 3d979fd6-a6c0-4bf1-baa4-7b281f339591 (pull-images) was prepared for execution. 2025-05-25 00:41:38.678286 | orchestrator | 2025-05-25 00:41:38 | INFO  | It takes a moment until task 3d979fd6-a6c0-4bf1-baa4-7b281f339591 (pull-images) has been started and output is visible here. 2025-05-25 00:41:41.727487 | orchestrator | 2025-05-25 00:41:41.727618 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-25 00:41:41.728588 | orchestrator | 2025-05-25 00:41:41.729357 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-25 00:41:41.729971 | orchestrator | Sunday 25 May 2025 00:41:41 +0000 (0:00:00.137) 0:00:00.137 ************ 2025-05-25 00:42:19.964952 | orchestrator | changed: [testbed-manager] 2025-05-25 00:42:19.965084 | orchestrator | 2025-05-25 00:42:19.965102 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-25 00:42:19.965115 | orchestrator | Sunday 25 May 2025 00:42:19 +0000 (0:00:38.235) 0:00:38.372 ************ 2025-05-25 00:43:06.526784 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-25 00:43:06.526939 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-25 00:43:06.526970 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-25 00:43:06.527564 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-25 00:43:06.528888 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-25 00:43:06.529626 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-25 00:43:06.530437 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-25 00:43:06.530993 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-25 00:43:06.531978 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-25 00:43:06.532509 | orchestrator | changed: [testbed-manager] => 
(item=ironic) 2025-05-25 00:43:06.534731 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-25 00:43:06.534763 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-25 00:43:06.534777 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-25 00:43:06.534788 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-25 00:43:06.534799 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-25 00:43:06.535174 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-25 00:43:06.535351 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-25 00:43:06.535972 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-25 00:43:06.536508 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-25 00:43:06.536806 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-25 00:43:06.537131 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-25 00:43:06.537652 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-25 00:43:06.537852 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-25 00:43:06.538407 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-25 00:43:06.538815 | orchestrator | 2025-05-25 00:43:06.540089 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:43:06.540112 | orchestrator | 2025-05-25 00:43:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-25 00:43:06.540125 | orchestrator | 2025-05-25 00:43:06 | INFO  | Please wait and do not abort execution. 
2025-05-25 00:43:06.540332 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:43:06.540762 | orchestrator |
2025-05-25 00:43:06.541165 | orchestrator | Sunday 25 May 2025 00:43:06 +0000 (0:00:46.563) 0:01:24.936 ************
2025-05-25 00:43:06.541728 | orchestrator | ===============================================================================
2025-05-25 00:43:06.542386 | orchestrator | Pull other images ------------------------------------------------------ 46.56s
2025-05-25 00:43:06.542953 | orchestrator | Pull keystone image ---------------------------------------------------- 38.24s
2025-05-25 00:43:08.577246 | orchestrator | 2025-05-25 00:43:08 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-25 00:43:08.626010 | orchestrator | 2025-05-25 00:43:08 | INFO  | Task b0c111ef-4936-4ae3-a3bf-79a5e18c7c73 (wipe-partitions) was prepared for execution.
2025-05-25 00:43:08.626150 | orchestrator | 2025-05-25 00:43:08 | INFO  | It takes a moment until task b0c111ef-4936-4ae3-a3bf-79a5e18c7c73 (wipe-partitions) has been started and output is visible here.
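The wipe-partitions play being prepared above ends in a `wipefs` pass over each node's data disks (/dev/sdb, /dev/sdc, /dev/sdd, per the task output that follows). A minimal sketch of that per-device loop; the helper name and the skip message are ours:

```shell
# Wipe filesystem/RAID/partition-table signatures from each given device,
# skipping anything that is not an actual block device.
wipe_devices() {
    local dev
    for dev in "$@"; do
        if [ -b "$dev" ]; then
            wipefs --all --force "$dev"  # destructive: erases all signatures
        else
            echo "skipping $dev: not a block device"
        fi
    done
}
```

Checking `-b` first mirrors the play's separate "Check device availability" task before the wipe itself.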
2025-05-25 00:43:11.736195 | orchestrator |
2025-05-25 00:43:11.738223 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-25 00:43:11.738260 | orchestrator |
2025-05-25 00:43:11.742181 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-25 00:43:11.742515 | orchestrator | Sunday 25 May 2025 00:43:11 +0000 (0:00:00.121) 0:00:00.121 ************
2025-05-25 00:43:12.289358 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:43:12.290253 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:43:12.291549 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:43:12.292947 | orchestrator |
2025-05-25 00:43:12.293412 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-25 00:43:12.294060 | orchestrator | Sunday 25 May 2025 00:43:12 +0000 (0:00:00.551) 0:00:00.673 ************
2025-05-25 00:43:12.479605 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:12.562573 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:43:12.562770 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:43:12.563506 | orchestrator |
2025-05-25 00:43:12.564100 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-25 00:43:12.564898 | orchestrator | Sunday 25 May 2025 00:43:12 +0000 (0:00:00.277) 0:00:00.950 ************
2025-05-25 00:43:13.284173 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:43:13.284278 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:43:13.286678 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:43:13.288911 | orchestrator |
2025-05-25 00:43:13.288957 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-25 00:43:13.288971 | orchestrator | Sunday 25 May 2025 00:43:13 +0000 (0:00:00.716) 0:00:01.667 ************
2025-05-25 00:43:13.432709 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:13.525212 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:43:13.527647 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:43:13.527688 | orchestrator |
2025-05-25 00:43:13.527704 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-25 00:43:13.529114 | orchestrator | Sunday 25 May 2025 00:43:13 +0000 (0:00:00.241) 0:00:01.908 ************
2025-05-25 00:43:14.706402 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-25 00:43:14.706515 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-25 00:43:14.706601 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-25 00:43:14.706950 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-25 00:43:14.707340 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-25 00:43:14.707874 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-25 00:43:14.708212 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-25 00:43:14.708561 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-25 00:43:14.709002 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-25 00:43:14.709517 | orchestrator |
2025-05-25 00:43:14.711003 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-25 00:43:14.711065 | orchestrator | Sunday 25 May 2025 00:43:14 +0000 (0:00:01.184) 0:00:03.092 ************
2025-05-25 00:43:16.020225 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-25 00:43:16.021017 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-25 00:43:16.022226 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-25 00:43:16.022904 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-25 00:43:16.023481 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-25 00:43:16.024183 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-25 00:43:16.025522 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-25 00:43:16.028969 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-25 00:43:16.029503 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-25 00:43:16.030687 | orchestrator |
2025-05-25 00:43:16.030861 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-25 00:43:16.034397 | orchestrator | Sunday 25 May 2025 00:43:16 +0000 (0:00:01.315) 0:00:04.408 ************
2025-05-25 00:43:18.151860 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-25 00:43:18.151968 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-25 00:43:18.155879 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-25 00:43:18.155971 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-25 00:43:18.155986 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-25 00:43:18.155998 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-25 00:43:18.156010 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-25 00:43:18.156021 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-25 00:43:18.156183 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-25 00:43:18.156702 | orchestrator |
2025-05-25 00:43:18.157116 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-25 00:43:18.157681 | orchestrator | Sunday 25 May 2025 00:43:18 +0000 (0:00:02.126) 0:00:06.534 ************
2025-05-25 00:43:18.747422 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:43:18.747866 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:43:18.748459 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:43:18.749833 | orchestrator |
2025-05-25 00:43:18.750214 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-25 00:43:18.750641 | orchestrator | Sunday 25 May 2025 00:43:18 +0000 (0:00:00.598) 0:00:07.133 ************
2025-05-25 00:43:19.354422 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:43:19.355063 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:43:19.356063 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:43:19.357043 | orchestrator |
2025-05-25 00:43:19.359134 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:43:19.359188 | orchestrator | 2025-05-25 00:43:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:43:19.359203 | orchestrator | 2025-05-25 00:43:19 | INFO  | Please wait and do not abort execution.
2025-05-25 00:43:19.361346 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:19.361476 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:19.361781 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:19.362454 | orchestrator |
2025-05-25 00:43:19.363058 | orchestrator | Sunday 25 May 2025 00:43:19 +0000 (0:00:00.608) 0:00:07.741 ************
2025-05-25 00:43:19.363582 | orchestrator | ===============================================================================
2025-05-25 00:43:19.363933 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2025-05-25 00:43:19.365895 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s
2025-05-25 00:43:19.366084 | orchestrator | Check device availability ----------------------------------------------- 1.18s
2025-05-25 00:43:19.366552 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.72s
2025-05-25 00:43:19.367012 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2025-05-25 00:43:19.367279 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2025-05-25 00:43:19.367895 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s
2025-05-25 00:43:19.370181 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s
2025-05-25 00:43:19.370460 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2025-05-25 00:43:21.341545 | orchestrator | 2025-05-25 00:43:21 | INFO  | Task af1dee38-d250-40a7-a3d4-d2dded86c7d0 (facts) was prepared for execution.
2025-05-25 00:43:21.341649 | orchestrator | 2025-05-25 00:43:21 | INFO  | It takes a moment until task af1dee38-d250-40a7-a3d4-d2dded86c7d0 (facts) has been started and output is visible here.
2025-05-25 00:43:24.263875 | orchestrator |
2025-05-25 00:43:24.263990 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-25 00:43:24.266110 | orchestrator |
2025-05-25 00:43:24.271426 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-25 00:43:24.271899 | orchestrator | Sunday 25 May 2025 00:43:24 +0000 (0:00:00.178) 0:00:00.178 ************
2025-05-25 00:43:25.205515 | orchestrator | ok: [testbed-manager]
2025-05-25 00:43:25.205671 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:43:25.207692 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:43:25.210005 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:43:25.211928 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:43:25.212796 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:43:25.215729 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:43:25.216233 | orchestrator |
2025-05-25 00:43:25.217525 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-25 00:43:25.218553 | orchestrator | Sunday 25 May 2025 00:43:25 +0000 (0:00:00.943) 0:00:01.121 ************
2025-05-25 00:43:25.367458 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:43:25.441294 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:43:25.513704 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:43:25.597603 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:43:25.664104 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:26.281161 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:43:26.282654 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:43:26.284995 | orchestrator |
2025-05-25 00:43:26.286793 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 00:43:26.288192 | orchestrator |
2025-05-25 00:43:26.288930 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 00:43:26.289785 | orchestrator | Sunday 25 May 2025 00:43:26 +0000 (0:00:01.082) 0:00:02.203 ************
2025-05-25 00:43:30.555761 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:43:30.555836 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:43:30.558000 | orchestrator | ok: [testbed-manager]
2025-05-25 00:43:30.558507 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:43:30.558869 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:43:30.563518 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:43:30.563551 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:43:30.563558 | orchestrator |
2025-05-25 00:43:30.563566 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-25 00:43:30.563572 | orchestrator |
2025-05-25 00:43:30.563711 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-25 00:43:30.564044 | orchestrator | Sunday 25 May 2025 00:43:30 +0000 (0:00:04.272) 0:00:06.476 ************
2025-05-25 00:43:30.852813 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:43:30.921647 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:43:30.993212 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:43:31.075644 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:43:31.152880 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:31.196269 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:43:31.196600 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:43:31.197458 | orchestrator |
2025-05-25 00:43:31.198122 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:43:31.198861 | orchestrator | 2025-05-25 00:43:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:43:31.199310 | orchestrator | 2025-05-25 00:43:31 | INFO  | Please wait and do not abort execution.
2025-05-25 00:43:31.200351 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:31.200889 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:31.201843 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:31.202460 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:31.203133 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:31.203184 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:31.203976 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:43:31.204146 | orchestrator |
2025-05-25 00:43:31.204746 | orchestrator | Sunday 25 May 2025 00:43:31 +0000 (0:00:00.642) 0:00:07.119 ************
2025-05-25 00:43:31.205468 | orchestrator | ===============================================================================
2025-05-25 00:43:31.206682 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.27s
2025-05-25 00:43:31.207177 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s
2025-05-25 00:43:31.208171 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.94s
2025-05-25 00:43:31.209086 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s
2025-05-25 00:43:33.233603 | orchestrator | 2025-05-25 00:43:33 | INFO  | Task fce84fe7-b8b6-4035-8537-08cf65005fba (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-25 00:43:33.233684 | orchestrator | 2025-05-25 00:43:33 | INFO  | It takes a moment until task fce84fe7-b8b6-4035-8537-08cf65005fba (ceph-configure-lvm-volumes) has been started and output is visible here.
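The ceph-configure-lvm-volumes play below assigns each OSD device a stable `osd_lvm_uuid` (the logged values are version-5 UUIDs, which suggests deterministic derivation) and then compiles the per-device entries into an `lvm_volumes` structure. A sketch of that flow under those assumptions; `OSISM_NS`, the name inputs, and the VG/LV naming are made up for illustration and are not taken from the playbook:

```python
import uuid

# Hypothetical namespace; the playbook's real namespace/inputs are not in the log.
OSISM_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "osism.example")

def configure_lvm_volumes(hostname, devices):
    """Derive a stable UUID per OSD device and compile a 'block only' lvm_volumes list."""
    # Mirrors "Set UUIDs for OSD VGs/LVs": same inputs always yield the same UUID.
    ceph_osd_devices = {
        dev: {"osd_lvm_uuid": str(uuid.uuid5(OSISM_NS, f"{hostname}-{dev}"))}
        for dev in devices
    }
    # Mirrors "Generate lvm_volumes structure (block only)" + "Compile lvm_volumes":
    # one data LV/VG pair per device, named after its UUID (naming scheme assumed).
    lvm_volumes = [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]
    return ceph_osd_devices, lvm_volumes

devices, volumes = configure_lvm_volumes("testbed-node-3", ["sdb", "sdc"])
```

Determinism is the point of the UUIDv5 step: re-running the play against the same host and devices reproduces the same `ceph_osd_devices` mapping (as printed by the "Print ceph_osd_devices" task), so the configuration is idempotent. The db/wal variants are skipped in this run, matching the `skipping:` results for the block+db and block+wal tasks.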
2025-05-25 00:43:36.660495 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-25 00:43:37.223520 | orchestrator |
2025-05-25 00:43:37.225804 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-25 00:43:37.228002 | orchestrator |
2025-05-25 00:43:37.228049 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-25 00:43:37.230634 | orchestrator | Sunday 25 May 2025 00:43:37 +0000 (0:00:00.484) 0:00:00.484 ************
2025-05-25 00:43:37.523971 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-25 00:43:37.526857 | orchestrator |
2025-05-25 00:43:37.526900 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-25 00:43:37.526916 | orchestrator | Sunday 25 May 2025 00:43:37 +0000 (0:00:00.302) 0:00:00.787 ************
2025-05-25 00:43:37.766001 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:43:37.766221 | orchestrator |
2025-05-25 00:43:37.766635 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:37.766996 | orchestrator | Sunday 25 May 2025 00:43:37 +0000 (0:00:00.244) 0:00:01.032 ************
2025-05-25 00:43:38.395108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-25 00:43:38.395294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-25 00:43:38.395548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-25 00:43:38.396589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-25 00:43:38.396868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-25 00:43:38.397590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-25 00:43:38.401016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-25 00:43:38.401830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-25 00:43:38.402692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-25 00:43:38.403998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-25 00:43:38.407186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-25 00:43:38.407694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-25 00:43:38.408195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-25 00:43:38.408876 | orchestrator |
2025-05-25 00:43:38.409716 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:38.410117 | orchestrator | Sunday 25 May 2025 00:43:38 +0000 (0:00:00.628) 0:00:01.660 ************
2025-05-25 00:43:38.610732 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:38.610824 | orchestrator |
2025-05-25 00:43:38.611585 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:38.612621 | orchestrator | Sunday 25 May 2025 00:43:38 +0000 (0:00:00.212) 0:00:01.873 ************
2025-05-25 00:43:38.843490 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:38.844773 | orchestrator |
2025-05-25 00:43:38.846185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:38.848087 | orchestrator | Sunday 25 May 2025 00:43:38 +0000 (0:00:00.236) 0:00:02.109 ************
2025-05-25 00:43:39.038762 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:39.040602 | orchestrator |
2025-05-25 00:43:39.040938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:39.041008 | orchestrator | Sunday 25 May 2025 00:43:39 +0000 (0:00:00.192) 0:00:02.302 ************
2025-05-25 00:43:39.318104 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:39.318217 | orchestrator |
2025-05-25 00:43:39.318688 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:39.318720 | orchestrator | Sunday 25 May 2025 00:43:39 +0000 (0:00:00.280) 0:00:02.582 ************
2025-05-25 00:43:39.557153 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:39.557259 | orchestrator |
2025-05-25 00:43:39.557281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:39.558000 | orchestrator | Sunday 25 May 2025 00:43:39 +0000 (0:00:00.236) 0:00:02.818 ************
2025-05-25 00:43:39.762463 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:39.763002 | orchestrator |
2025-05-25 00:43:39.763208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:39.763897 | orchestrator | Sunday 25 May 2025 00:43:39 +0000 (0:00:00.209) 0:00:03.028 ************
2025-05-25 00:43:39.967394 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:39.967525 | orchestrator |
2025-05-25 00:43:39.967542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:39.967614 | orchestrator | Sunday 25 May 2025 00:43:39 +0000 (0:00:00.206) 0:00:03.234 ************
2025-05-25 00:43:40.170127 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:40.170285 | orchestrator |
2025-05-25 00:43:40.170440 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:40.171618 | orchestrator | Sunday 25 May 2025 00:43:40 +0000 (0:00:00.202) 0:00:03.436 ************
2025-05-25 00:43:40.814676 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b)
2025-05-25 00:43:40.815174 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b)
2025-05-25 00:43:40.815543 | orchestrator |
2025-05-25 00:43:40.815617 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:40.816769 | orchestrator | Sunday 25 May 2025 00:43:40 +0000 (0:00:00.640) 0:00:04.076 ************
2025-05-25 00:43:41.614542 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6)
2025-05-25 00:43:41.614715 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6)
2025-05-25 00:43:41.615569 | orchestrator |
2025-05-25 00:43:41.615676 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:41.616127 | orchestrator | Sunday 25 May 2025 00:43:41 +0000 (0:00:00.802) 0:00:04.879 ************
2025-05-25 00:43:42.053452 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8)
2025-05-25 00:43:42.055154 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8)
2025-05-25 00:43:42.055194 | orchestrator |
2025-05-25 00:43:42.055211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:42.055224 | orchestrator | Sunday 25 May 2025 00:43:42 +0000 (0:00:00.439) 0:00:05.319 ************
2025-05-25 00:43:42.513694 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d)
2025-05-25 00:43:42.514086 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d)
2025-05-25 00:43:42.514569 | orchestrator |
2025-05-25 00:43:42.515135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:43:42.516209 | orchestrator | Sunday 25 May 2025 00:43:42 +0000 (0:00:00.460) 0:00:05.779 ************
2025-05-25 00:43:42.835898 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-25 00:43:42.836058 | orchestrator |
2025-05-25 00:43:42.836915 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:42.837253 | orchestrator | Sunday 25 May 2025 00:43:42 +0000 (0:00:00.322) 0:00:06.101 ************
2025-05-25 00:43:43.250175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-25 00:43:43.250621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-25 00:43:43.251819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-25 00:43:43.255603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-25 00:43:43.256804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-25 00:43:43.258090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-25 00:43:43.258765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-25 00:43:43.260469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-25 00:43:43.261429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-25 00:43:43.261815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-25 00:43:43.263166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-25 00:43:43.264580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-25 00:43:43.265855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-25 00:43:43.267152 | orchestrator |
2025-05-25 00:43:43.267813 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:43.268183 | orchestrator | Sunday 25 May 2025 00:43:43 +0000 (0:00:00.414) 0:00:06.516 ************
2025-05-25 00:43:43.462142 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:43.462469 | orchestrator |
2025-05-25 00:43:43.464107 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:43.464972 | orchestrator | Sunday 25 May 2025 00:43:43 +0000 (0:00:00.211) 0:00:06.727 ************
2025-05-25 00:43:43.652394 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:43.653090 | orchestrator |
2025-05-25 00:43:43.654457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:43.658090 | orchestrator | Sunday 25 May 2025 00:43:43 +0000 (0:00:00.191) 0:00:06.918 ************
2025-05-25 00:43:43.857593 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:43.857984 | orchestrator |
2025-05-25 00:43:43.859036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:43.861209 | orchestrator | Sunday 25 May 2025 00:43:43 +0000 (0:00:00.204) 0:00:07.123 ************
2025-05-25 00:43:44.030612 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:44.030818 | orchestrator |
2025-05-25 00:43:44.032330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:44.033386 | orchestrator | Sunday 25 May 2025 00:43:44 +0000 (0:00:00.173) 0:00:07.297 ************
2025-05-25 00:43:44.407303 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:44.408236 | orchestrator |
2025-05-25 00:43:44.409937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:44.412506 | orchestrator | Sunday 25 May 2025 00:43:44 +0000 (0:00:00.377) 0:00:07.674 ************
2025-05-25 00:43:44.592334 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:44.592445 | orchestrator |
2025-05-25 00:43:44.592456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:44.595019 | orchestrator | Sunday 25 May 2025 00:43:44 +0000 (0:00:00.180) 0:00:07.854 ************
2025-05-25 00:43:44.842488 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:44.842820 | orchestrator |
2025-05-25 00:43:44.844065 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:44.844098 | orchestrator | Sunday 25 May 2025 00:43:44 +0000 (0:00:00.254) 0:00:08.109 ************
2025-05-25 00:43:45.055989 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:45.056091 | orchestrator |
2025-05-25 00:43:45.056749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:45.057728 | orchestrator | Sunday 25 May 2025 00:43:45 +0000 (0:00:00.207) 0:00:08.317 ************
2025-05-25 00:43:45.667855 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-25 00:43:45.669557 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-25 00:43:45.673889 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-25 00:43:45.674876 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-25 00:43:45.675987 | orchestrator |
2025-05-25 00:43:45.677092 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:45.678141 | orchestrator | Sunday 25 May 2025 00:43:45 +0000 (0:00:00.616) 0:00:08.934 ************
2025-05-25 00:43:45.879457 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:45.883044 | orchestrator |
2025-05-25 00:43:45.883706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:45.884437 | orchestrator | Sunday 25 May 2025 00:43:45 +0000 (0:00:00.207) 0:00:09.142 ************
2025-05-25 00:43:46.076728 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:46.077274 | orchestrator |
2025-05-25 00:43:46.079622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:46.079665 | orchestrator | Sunday 25 May 2025 00:43:46 +0000 (0:00:00.200) 0:00:09.342 ************
2025-05-25 00:43:46.276458 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:46.276557 | orchestrator |
2025-05-25 00:43:46.276677 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:43:46.277940 | orchestrator | Sunday 25 May 2025 00:43:46 +0000 (0:00:00.200) 0:00:09.543 ************
2025-05-25 00:43:46.470141 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:46.472432 | orchestrator |
2025-05-25 00:43:46.472551 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-25 00:43:46.472706 | orchestrator | Sunday 25 May 2025 00:43:46 +0000 (0:00:00.194) 0:00:09.738 ************
2025-05-25 00:43:46.649300 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-25 00:43:46.650245 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-25 00:43:46.650663 | orchestrator |
2025-05-25 00:43:46.651003 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-25 00:43:46.651422 | orchestrator | Sunday 25 May 2025 00:43:46 +0000 (0:00:00.179) 0:00:09.917 ************
2025-05-25 00:43:46.785149 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:46.787163 | orchestrator |
2025-05-25 00:43:46.787351 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-25 00:43:46.787574 | orchestrator | Sunday 25 May 2025 00:43:46 +0000 (0:00:00.130) 0:00:10.048 ************
2025-05-25 00:43:47.096332 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:47.097031 | orchestrator |
2025-05-25 00:43:47.097068 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-25 00:43:47.097568 | orchestrator | Sunday 25 May 2025 00:43:47 +0000 (0:00:00.316) 0:00:10.364 ************
2025-05-25 00:43:47.223928 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:47.224067 | orchestrator |
2025-05-25 00:43:47.224584 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-25 00:43:47.225119 | orchestrator | Sunday 25 May 2025 00:43:47 +0000 (0:00:00.127) 0:00:10.491 ************
2025-05-25 00:43:47.366741 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:43:47.367180 | orchestrator |
2025-05-25 00:43:47.367486 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-25 00:43:47.369779 | orchestrator | Sunday 25 May 2025 00:43:47 +0000 (0:00:00.142) 0:00:10.634 ************
2025-05-25 00:43:47.552541 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91dc6ac0-e554-5716-a575-6858f2de7d62'}})
2025-05-25 00:43:47.555414 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'}})
2025-05-25 00:43:47.558323 | orchestrator |
2025-05-25 00:43:47.558400 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-25 00:43:47.560629 | orchestrator | Sunday 25 May 2025 00:43:47 +0000 (0:00:00.186) 0:00:10.820 ************
2025-05-25 00:43:47.700694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91dc6ac0-e554-5716-a575-6858f2de7d62'}})
2025-05-25 00:43:47.702771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'}})
2025-05-25 00:43:47.703508 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:47.703549 | orchestrator |
2025-05-25 00:43:47.703611 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-25 00:43:47.704633 | orchestrator | Sunday 25 May 2025 00:43:47 +0000 (0:00:00.147) 0:00:10.968 ************
2025-05-25 00:43:47.870101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91dc6ac0-e554-5716-a575-6858f2de7d62'}})
2025-05-25 00:43:47.872840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'}})
2025-05-25 00:43:47.873259 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:47.873689 | orchestrator |
2025-05-25 00:43:47.874071 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-25 00:43:47.876038 | orchestrator | Sunday 25 May 2025 00:43:47 +0000 (0:00:00.169) 0:00:11.138 ************
2025-05-25 00:43:48.024023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91dc6ac0-e554-5716-a575-6858f2de7d62'}})
2025-05-25 00:43:48.024977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'}})
2025-05-25 00:43:48.024999 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:48.027243 | orchestrator |
2025-05-25 00:43:48.027271 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-25 00:43:48.027348 | orchestrator | Sunday 25 May 2025 00:43:48 +0000 (0:00:00.154) 0:00:11.292 ************
2025-05-25 00:43:48.164407 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:43:48.164497 | orchestrator |
2025-05-25 00:43:48.167076 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-25 00:43:48.167122 | orchestrator | Sunday 25 May 2025 00:43:48 +0000 (0:00:00.138) 0:00:11.430 ************
2025-05-25 00:43:48.305266 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:43:48.307899 | orchestrator |
2025-05-25 00:43:48.308274 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-25 00:43:48.310337 | orchestrator | Sunday 25 May 2025 00:43:48 +0000 (0:00:00.139) 0:00:11.570 ************
2025-05-25 00:43:48.434475 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:48.435730 | orchestrator |
2025-05-25 00:43:48.435751 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-25 00:43:48.435986 | orchestrator | Sunday 25 May 2025 00:43:48 +0000 (0:00:00.132) 0:00:11.702 ************
2025-05-25 00:43:48.572316 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:48.574261 | orchestrator |
2025-05-25 00:43:48.574479 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-25 00:43:48.574814 | orchestrator | Sunday 25 May 2025 00:43:48 +0000 (0:00:00.137) 0:00:11.839 ************
2025-05-25 00:43:48.721582 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:48.727959 | orchestrator |
2025-05-25 00:43:48.728015 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-25 00:43:48.728030 | orchestrator | Sunday 25 May 2025 00:43:48 +0000 (0:00:00.144) 0:00:11.984 ************
2025-05-25 00:43:49.017909 | orchestrator | ok: [testbed-node-3] => {
2025-05-25 00:43:49.018130 | orchestrator |  "ceph_osd_devices": {
2025-05-25 00:43:49.019598 | orchestrator |  "sdb": {
2025-05-25 00:43:49.020063 | orchestrator |  "osd_lvm_uuid": "91dc6ac0-e554-5716-a575-6858f2de7d62"
2025-05-25 00:43:49.022091 | orchestrator |  },
2025-05-25 00:43:49.024934 | orchestrator |  "sdc": {
2025-05-25 00:43:49.025256 | orchestrator |  "osd_lvm_uuid": "a344b0dc-179a-5809-8fe1-9e4cbc2dd42d"
2025-05-25 00:43:49.026545 | orchestrator |  }
2025-05-25 00:43:49.026807 | orchestrator |  }
2025-05-25 00:43:49.026977 | orchestrator | }
2025-05-25 00:43:49.027123 | orchestrator |
2025-05-25 00:43:49.027520 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-25 00:43:49.027692 | orchestrator | Sunday 25 May 2025 00:43:49 +0000 (0:00:00.300) 0:00:12.284 ************
2025-05-25 00:43:49.145577 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:49.146285 | orchestrator |
2025-05-25 00:43:49.146543 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-25 00:43:49.146627 | orchestrator | Sunday 25 May 2025 00:43:49 +0000 (0:00:00.129) 0:00:12.414 ************
2025-05-25 00:43:49.255936 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:49.256043 | orchestrator |
2025-05-25 00:43:49.256758 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-25 00:43:49.256858 | orchestrator | Sunday 25 May 2025 00:43:49 +0000 (0:00:00.110) 0:00:12.524 ************
2025-05-25 00:43:49.384587 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:43:49.384687 | orchestrator |
2025-05-25 00:43:49.384702 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-25 00:43:49.385605 | orchestrator | Sunday 25 May 2025 00:43:49 +0000
(0:00:00.128) 0:00:12.652 ************ 2025-05-25 00:43:49.631273 | orchestrator | changed: [testbed-node-3] => { 2025-05-25 00:43:49.631430 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-25 00:43:49.632109 | orchestrator |  "ceph_osd_devices": { 2025-05-25 00:43:49.632189 | orchestrator |  "sdb": { 2025-05-25 00:43:49.634464 | orchestrator |  "osd_lvm_uuid": "91dc6ac0-e554-5716-a575-6858f2de7d62" 2025-05-25 00:43:49.634607 | orchestrator |  }, 2025-05-25 00:43:49.634678 | orchestrator |  "sdc": { 2025-05-25 00:43:49.634854 | orchestrator |  "osd_lvm_uuid": "a344b0dc-179a-5809-8fe1-9e4cbc2dd42d" 2025-05-25 00:43:49.635423 | orchestrator |  } 2025-05-25 00:43:49.635520 | orchestrator |  }, 2025-05-25 00:43:49.635601 | orchestrator |  "lvm_volumes": [ 2025-05-25 00:43:49.635763 | orchestrator |  { 2025-05-25 00:43:49.636146 | orchestrator |  "data": "osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62", 2025-05-25 00:43:49.636192 | orchestrator |  "data_vg": "ceph-91dc6ac0-e554-5716-a575-6858f2de7d62" 2025-05-25 00:43:49.636453 | orchestrator |  }, 2025-05-25 00:43:49.636757 | orchestrator |  { 2025-05-25 00:43:49.636845 | orchestrator |  "data": "osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d", 2025-05-25 00:43:49.637086 | orchestrator |  "data_vg": "ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d" 2025-05-25 00:43:49.637217 | orchestrator |  } 2025-05-25 00:43:49.637476 | orchestrator |  ] 2025-05-25 00:43:49.637724 | orchestrator |  } 2025-05-25 00:43:49.637921 | orchestrator | } 2025-05-25 00:43:49.638116 | orchestrator | 2025-05-25 00:43:49.638297 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-25 00:43:49.638523 | orchestrator | Sunday 25 May 2025 00:43:49 +0000 (0:00:00.246) 0:00:12.899 ************ 2025-05-25 00:43:51.466997 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 00:43:51.467165 | orchestrator | 2025-05-25 00:43:51.471539 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2025-05-25 00:43:51.472550 | orchestrator | 2025-05-25 00:43:51.472749 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-25 00:43:51.473142 | orchestrator | Sunday 25 May 2025 00:43:51 +0000 (0:00:01.835) 0:00:14.735 ************ 2025-05-25 00:43:51.706163 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-25 00:43:51.706294 | orchestrator | 2025-05-25 00:43:51.706427 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-25 00:43:51.706447 | orchestrator | Sunday 25 May 2025 00:43:51 +0000 (0:00:00.235) 0:00:14.971 ************ 2025-05-25 00:43:51.882375 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:43:51.885633 | orchestrator | 2025-05-25 00:43:51.885670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:51.885901 | orchestrator | Sunday 25 May 2025 00:43:51 +0000 (0:00:00.178) 0:00:15.150 ************ 2025-05-25 00:43:52.197685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-25 00:43:52.197791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-25 00:43:52.199510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-25 00:43:52.201430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-25 00:43:52.202593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-25 00:43:52.203900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-25 00:43:52.204771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-25 00:43:52.206147 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-25 00:43:52.206680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-25 00:43:52.207203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-25 00:43:52.207815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-25 00:43:52.208516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-25 00:43:52.209671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-25 00:43:52.210144 | orchestrator | 2025-05-25 00:43:52.211009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:52.211497 | orchestrator | Sunday 25 May 2025 00:43:52 +0000 (0:00:00.312) 0:00:15.462 ************ 2025-05-25 00:43:52.372608 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:52.372819 | orchestrator | 2025-05-25 00:43:52.373223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:52.374502 | orchestrator | Sunday 25 May 2025 00:43:52 +0000 (0:00:00.177) 0:00:15.640 ************ 2025-05-25 00:43:52.560128 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:52.561436 | orchestrator | 2025-05-25 00:43:52.563806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:52.563830 | orchestrator | Sunday 25 May 2025 00:43:52 +0000 (0:00:00.186) 0:00:15.827 ************ 2025-05-25 00:43:52.742648 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:52.742741 | orchestrator | 2025-05-25 00:43:52.742823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:52.743848 | 
orchestrator | Sunday 25 May 2025 00:43:52 +0000 (0:00:00.181) 0:00:16.008 ************ 2025-05-25 00:43:52.920429 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:52.923134 | orchestrator | 2025-05-25 00:43:52.923350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:52.923612 | orchestrator | Sunday 25 May 2025 00:43:52 +0000 (0:00:00.177) 0:00:16.186 ************ 2025-05-25 00:43:53.352006 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:53.353510 | orchestrator | 2025-05-25 00:43:53.354175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:53.355641 | orchestrator | Sunday 25 May 2025 00:43:53 +0000 (0:00:00.431) 0:00:16.618 ************ 2025-05-25 00:43:53.543698 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:53.544135 | orchestrator | 2025-05-25 00:43:53.544580 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:53.545166 | orchestrator | Sunday 25 May 2025 00:43:53 +0000 (0:00:00.191) 0:00:16.810 ************ 2025-05-25 00:43:53.717584 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:53.717804 | orchestrator | 2025-05-25 00:43:53.718429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:53.719104 | orchestrator | Sunday 25 May 2025 00:43:53 +0000 (0:00:00.174) 0:00:16.985 ************ 2025-05-25 00:43:53.910141 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:53.911278 | orchestrator | 2025-05-25 00:43:53.911306 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:53.911500 | orchestrator | Sunday 25 May 2025 00:43:53 +0000 (0:00:00.191) 0:00:17.176 ************ 2025-05-25 00:43:54.304343 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357) 2025-05-25 00:43:54.305121 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357) 2025-05-25 00:43:54.305197 | orchestrator | 2025-05-25 00:43:54.305587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:54.306520 | orchestrator | Sunday 25 May 2025 00:43:54 +0000 (0:00:00.392) 0:00:17.569 ************ 2025-05-25 00:43:54.672076 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c) 2025-05-25 00:43:54.672176 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c) 2025-05-25 00:43:54.673687 | orchestrator | 2025-05-25 00:43:54.673719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:54.673733 | orchestrator | Sunday 25 May 2025 00:43:54 +0000 (0:00:00.370) 0:00:17.940 ************ 2025-05-25 00:43:55.077112 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d) 2025-05-25 00:43:55.078115 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d) 2025-05-25 00:43:55.079893 | orchestrator | 2025-05-25 00:43:55.081082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:43:55.081625 | orchestrator | Sunday 25 May 2025 00:43:55 +0000 (0:00:00.402) 0:00:18.342 ************ 2025-05-25 00:43:55.483765 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9) 2025-05-25 00:43:55.483973 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9) 2025-05-25 00:43:55.485113 | orchestrator | 2025-05-25 00:43:55.485217 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-25 00:43:55.485925 | orchestrator | Sunday 25 May 2025 00:43:55 +0000 (0:00:00.407) 0:00:18.750 ************ 2025-05-25 00:43:55.811201 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 00:43:55.812072 | orchestrator | 2025-05-25 00:43:55.812418 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:55.814136 | orchestrator | Sunday 25 May 2025 00:43:55 +0000 (0:00:00.328) 0:00:19.078 ************ 2025-05-25 00:43:56.335532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-25 00:43:56.335761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-25 00:43:56.338291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-25 00:43:56.339496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-25 00:43:56.339525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-25 00:43:56.340038 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-25 00:43:56.340682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-25 00:43:56.341136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-25 00:43:56.342521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-25 00:43:56.342782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-25 00:43:56.342985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2025-05-25 00:43:56.343485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-25 00:43:56.343753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-25 00:43:56.344000 | orchestrator | 2025-05-25 00:43:56.344880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:56.345065 | orchestrator | Sunday 25 May 2025 00:43:56 +0000 (0:00:00.524) 0:00:19.602 ************ 2025-05-25 00:43:56.531681 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:56.532722 | orchestrator | 2025-05-25 00:43:56.533103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:56.533753 | orchestrator | Sunday 25 May 2025 00:43:56 +0000 (0:00:00.197) 0:00:19.799 ************ 2025-05-25 00:43:56.709906 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:56.710002 | orchestrator | 2025-05-25 00:43:56.710060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:56.710162 | orchestrator | Sunday 25 May 2025 00:43:56 +0000 (0:00:00.177) 0:00:19.977 ************ 2025-05-25 00:43:56.886933 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:56.887115 | orchestrator | 2025-05-25 00:43:56.887766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:56.887906 | orchestrator | Sunday 25 May 2025 00:43:56 +0000 (0:00:00.176) 0:00:20.153 ************ 2025-05-25 00:43:57.078333 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:57.078604 | orchestrator | 2025-05-25 00:43:57.078629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:57.078642 | orchestrator | Sunday 25 May 2025 00:43:57 +0000 (0:00:00.191) 0:00:20.345 ************ 2025-05-25 00:43:57.256224 
| orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:57.256504 | orchestrator | 2025-05-25 00:43:57.256665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:57.257984 | orchestrator | Sunday 25 May 2025 00:43:57 +0000 (0:00:00.178) 0:00:20.524 ************ 2025-05-25 00:43:57.426555 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:57.426938 | orchestrator | 2025-05-25 00:43:57.426971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:57.428898 | orchestrator | Sunday 25 May 2025 00:43:57 +0000 (0:00:00.167) 0:00:20.691 ************ 2025-05-25 00:43:57.577820 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:57.577922 | orchestrator | 2025-05-25 00:43:57.577938 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:57.578539 | orchestrator | Sunday 25 May 2025 00:43:57 +0000 (0:00:00.149) 0:00:20.841 ************ 2025-05-25 00:43:57.750989 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:57.751503 | orchestrator | 2025-05-25 00:43:57.751951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:57.752412 | orchestrator | Sunday 25 May 2025 00:43:57 +0000 (0:00:00.176) 0:00:21.018 ************ 2025-05-25 00:43:58.443976 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-25 00:43:58.445933 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-25 00:43:58.446874 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-25 00:43:58.449900 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-25 00:43:58.450476 | orchestrator | 2025-05-25 00:43:58.451156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:58.451812 | orchestrator | Sunday 25 May 2025 00:43:58 +0000 (0:00:00.691) 0:00:21.709 
************ 2025-05-25 00:43:58.621180 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:58.621324 | orchestrator | 2025-05-25 00:43:58.622098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:58.622174 | orchestrator | Sunday 25 May 2025 00:43:58 +0000 (0:00:00.180) 0:00:21.889 ************ 2025-05-25 00:43:59.075978 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:59.078479 | orchestrator | 2025-05-25 00:43:59.078508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:59.078514 | orchestrator | Sunday 25 May 2025 00:43:59 +0000 (0:00:00.452) 0:00:22.342 ************ 2025-05-25 00:43:59.263459 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:59.263628 | orchestrator | 2025-05-25 00:43:59.263902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:43:59.265463 | orchestrator | Sunday 25 May 2025 00:43:59 +0000 (0:00:00.186) 0:00:22.529 ************ 2025-05-25 00:43:59.442504 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:59.442605 | orchestrator | 2025-05-25 00:43:59.442941 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-25 00:43:59.443784 | orchestrator | Sunday 25 May 2025 00:43:59 +0000 (0:00:00.180) 0:00:22.709 ************ 2025-05-25 00:43:59.606814 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-25 00:43:59.611918 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-25 00:43:59.612517 | orchestrator | 2025-05-25 00:43:59.613101 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-25 00:43:59.613832 | orchestrator | Sunday 25 May 2025 00:43:59 +0000 (0:00:00.165) 0:00:22.874 ************ 2025-05-25 00:43:59.726837 | orchestrator | skipping: 
[testbed-node-4] 2025-05-25 00:43:59.727064 | orchestrator | 2025-05-25 00:43:59.727732 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-25 00:43:59.728198 | orchestrator | Sunday 25 May 2025 00:43:59 +0000 (0:00:00.119) 0:00:22.994 ************ 2025-05-25 00:43:59.852934 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:59.854399 | orchestrator | 2025-05-25 00:43:59.854650 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-25 00:43:59.855098 | orchestrator | Sunday 25 May 2025 00:43:59 +0000 (0:00:00.124) 0:00:23.119 ************ 2025-05-25 00:43:59.986612 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:43:59.986684 | orchestrator | 2025-05-25 00:43:59.987483 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-25 00:43:59.987502 | orchestrator | Sunday 25 May 2025 00:43:59 +0000 (0:00:00.132) 0:00:23.251 ************ 2025-05-25 00:44:00.109767 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:44:00.110827 | orchestrator | 2025-05-25 00:44:00.112004 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-25 00:44:00.113248 | orchestrator | Sunday 25 May 2025 00:44:00 +0000 (0:00:00.124) 0:00:23.376 ************ 2025-05-25 00:44:00.269617 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86509461-9ff7-5f8d-a545-2dedda0a1471'}}) 2025-05-25 00:44:00.269887 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1f6e0dcd-8614-5501-94b8-6b816e10f3a3'}}) 2025-05-25 00:44:00.271532 | orchestrator | 2025-05-25 00:44:00.272718 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-25 00:44:00.273799 | orchestrator | Sunday 25 May 2025 00:44:00 +0000 (0:00:00.159) 0:00:23.536 ************ 2025-05-25 00:44:00.413234 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86509461-9ff7-5f8d-a545-2dedda0a1471'}})  2025-05-25 00:44:00.415933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1f6e0dcd-8614-5501-94b8-6b816e10f3a3'}})  2025-05-25 00:44:00.415976 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:00.416226 | orchestrator | 2025-05-25 00:44:00.417493 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-25 00:44:00.418764 | orchestrator | Sunday 25 May 2025 00:44:00 +0000 (0:00:00.144) 0:00:23.680 ************ 2025-05-25 00:44:00.542752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86509461-9ff7-5f8d-a545-2dedda0a1471'}})  2025-05-25 00:44:00.542850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1f6e0dcd-8614-5501-94b8-6b816e10f3a3'}})  2025-05-25 00:44:00.543322 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:00.545230 | orchestrator | 2025-05-25 00:44:00.547385 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-25 00:44:00.548518 | orchestrator | Sunday 25 May 2025 00:44:00 +0000 (0:00:00.127) 0:00:23.808 ************ 2025-05-25 00:44:00.812936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86509461-9ff7-5f8d-a545-2dedda0a1471'}})  2025-05-25 00:44:00.813484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1f6e0dcd-8614-5501-94b8-6b816e10f3a3'}})  2025-05-25 00:44:00.814591 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:00.818657 | orchestrator | 2025-05-25 00:44:00.818815 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-25 00:44:00.819219 | orchestrator | Sunday 25 May 2025 00:44:00 +0000 
(0:00:00.272) 0:00:24.080 ************ 2025-05-25 00:44:00.943476 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:44:00.944481 | orchestrator | 2025-05-25 00:44:00.946567 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-25 00:44:00.948595 | orchestrator | Sunday 25 May 2025 00:44:00 +0000 (0:00:00.130) 0:00:24.210 ************ 2025-05-25 00:44:01.072984 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:44:01.073078 | orchestrator | 2025-05-25 00:44:01.073156 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-25 00:44:01.073943 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.125) 0:00:24.336 ************ 2025-05-25 00:44:01.197126 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:01.198105 | orchestrator | 2025-05-25 00:44:01.199747 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-25 00:44:01.202766 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.126) 0:00:24.463 ************ 2025-05-25 00:44:01.321665 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:01.322583 | orchestrator | 2025-05-25 00:44:01.324161 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-25 00:44:01.325055 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.125) 0:00:24.588 ************ 2025-05-25 00:44:01.440531 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:01.441672 | orchestrator | 2025-05-25 00:44:01.443282 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-25 00:44:01.443308 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.119) 0:00:24.708 ************ 2025-05-25 00:44:01.561281 | orchestrator | ok: [testbed-node-4] => { 2025-05-25 00:44:01.561905 | orchestrator |  "ceph_osd_devices": { 2025-05-25 00:44:01.562759 | orchestrator |  
"sdb": { 2025-05-25 00:44:01.563869 | orchestrator |  "osd_lvm_uuid": "86509461-9ff7-5f8d-a545-2dedda0a1471" 2025-05-25 00:44:01.564793 | orchestrator |  }, 2025-05-25 00:44:01.565619 | orchestrator |  "sdc": { 2025-05-25 00:44:01.566582 | orchestrator |  "osd_lvm_uuid": "1f6e0dcd-8614-5501-94b8-6b816e10f3a3" 2025-05-25 00:44:01.567116 | orchestrator |  } 2025-05-25 00:44:01.568128 | orchestrator |  } 2025-05-25 00:44:01.568725 | orchestrator | } 2025-05-25 00:44:01.569948 | orchestrator | 2025-05-25 00:44:01.570205 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-25 00:44:01.571062 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.119) 0:00:24.827 ************ 2025-05-25 00:44:01.688146 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:01.689595 | orchestrator | 2025-05-25 00:44:01.690061 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-25 00:44:01.690463 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.127) 0:00:24.955 ************ 2025-05-25 00:44:01.818506 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:01.818947 | orchestrator | 2025-05-25 00:44:01.820972 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-25 00:44:01.821562 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.129) 0:00:25.085 ************ 2025-05-25 00:44:01.944059 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:44:01.945346 | orchestrator | 2025-05-25 00:44:01.948674 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-25 00:44:01.949836 | orchestrator | Sunday 25 May 2025 00:44:01 +0000 (0:00:00.126) 0:00:25.211 ************ 2025-05-25 00:44:02.199101 | orchestrator | changed: [testbed-node-4] => { 2025-05-25 00:44:02.200523 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-25 00:44:02.201401 | 
orchestrator |  "ceph_osd_devices": { 2025-05-25 00:44:02.202375 | orchestrator |  "sdb": { 2025-05-25 00:44:02.203270 | orchestrator |  "osd_lvm_uuid": "86509461-9ff7-5f8d-a545-2dedda0a1471" 2025-05-25 00:44:02.205441 | orchestrator |  }, 2025-05-25 00:44:02.205938 | orchestrator |  "sdc": { 2025-05-25 00:44:02.207676 | orchestrator |  "osd_lvm_uuid": "1f6e0dcd-8614-5501-94b8-6b816e10f3a3" 2025-05-25 00:44:02.211152 | orchestrator |  } 2025-05-25 00:44:02.211882 | orchestrator |  }, 2025-05-25 00:44:02.212846 | orchestrator |  "lvm_volumes": [ 2025-05-25 00:44:02.213423 | orchestrator |  { 2025-05-25 00:44:02.214414 | orchestrator |  "data": "osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471", 2025-05-25 00:44:02.215866 | orchestrator |  "data_vg": "ceph-86509461-9ff7-5f8d-a545-2dedda0a1471" 2025-05-25 00:44:02.216439 | orchestrator |  }, 2025-05-25 00:44:02.216835 | orchestrator |  { 2025-05-25 00:44:02.219044 | orchestrator |  "data": "osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3", 2025-05-25 00:44:02.219231 | orchestrator |  "data_vg": "ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3" 2025-05-25 00:44:02.219730 | orchestrator |  } 2025-05-25 00:44:02.220972 | orchestrator |  ] 2025-05-25 00:44:02.221442 | orchestrator |  } 2025-05-25 00:44:02.222231 | orchestrator | } 2025-05-25 00:44:02.222425 | orchestrator | 2025-05-25 00:44:02.222847 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-25 00:44:02.224224 | orchestrator | Sunday 25 May 2025 00:44:02 +0000 (0:00:00.250) 0:00:25.461 ************ 2025-05-25 00:44:03.773790 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-25 00:44:03.775140 | orchestrator | 2025-05-25 00:44:03.777450 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-25 00:44:03.778802 | orchestrator | 2025-05-25 00:44:03.779891 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2025-05-25 00:44:03.780707 | orchestrator | Sunday 25 May 2025 00:44:03 +0000 (0:00:01.577) 0:00:27.039 ************ 2025-05-25 00:44:04.013539 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-25 00:44:04.013631 | orchestrator | 2025-05-25 00:44:04.013699 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-25 00:44:04.013893 | orchestrator | Sunday 25 May 2025 00:44:04 +0000 (0:00:00.241) 0:00:27.281 ************ 2025-05-25 00:44:04.259431 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:44:04.260297 | orchestrator | 2025-05-25 00:44:04.261163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:04.263137 | orchestrator | Sunday 25 May 2025 00:44:04 +0000 (0:00:00.242) 0:00:27.524 ************ 2025-05-25 00:44:04.773044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-25 00:44:04.774139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-25 00:44:04.774279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-25 00:44:04.775671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-25 00:44:04.776139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-25 00:44:04.777435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-25 00:44:04.777455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-25 00:44:04.777757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-25 00:44:04.778318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=sda) 2025-05-25 00:44:04.779135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-25 00:44:04.780092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-25 00:44:04.780737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-25 00:44:04.781163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-25 00:44:04.781724 | orchestrator | 2025-05-25 00:44:04.782181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:04.782704 | orchestrator | Sunday 25 May 2025 00:44:04 +0000 (0:00:00.515) 0:00:28.039 ************ 2025-05-25 00:44:04.976995 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:04.977884 | orchestrator | 2025-05-25 00:44:04.978616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:04.979527 | orchestrator | Sunday 25 May 2025 00:44:04 +0000 (0:00:00.203) 0:00:28.243 ************ 2025-05-25 00:44:05.181453 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:05.182095 | orchestrator | 2025-05-25 00:44:05.183712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:05.184192 | orchestrator | Sunday 25 May 2025 00:44:05 +0000 (0:00:00.200) 0:00:28.444 ************ 2025-05-25 00:44:05.375737 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:05.376596 | orchestrator | 2025-05-25 00:44:05.379396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:05.379429 | orchestrator | Sunday 25 May 2025 00:44:05 +0000 (0:00:00.197) 0:00:28.641 ************ 2025-05-25 00:44:05.606873 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:05.607327 | orchestrator | 
2025-05-25 00:44:05.608427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:05.610168 | orchestrator | Sunday 25 May 2025 00:44:05 +0000 (0:00:00.230) 0:00:28.872 ************ 2025-05-25 00:44:05.802294 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:05.803459 | orchestrator | 2025-05-25 00:44:05.804469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:05.806630 | orchestrator | Sunday 25 May 2025 00:44:05 +0000 (0:00:00.196) 0:00:29.068 ************ 2025-05-25 00:44:06.009297 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:06.009496 | orchestrator | 2025-05-25 00:44:06.010132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:06.011181 | orchestrator | Sunday 25 May 2025 00:44:06 +0000 (0:00:00.207) 0:00:29.275 ************ 2025-05-25 00:44:06.205980 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:06.206493 | orchestrator | 2025-05-25 00:44:06.207824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:06.208637 | orchestrator | Sunday 25 May 2025 00:44:06 +0000 (0:00:00.195) 0:00:29.471 ************ 2025-05-25 00:44:06.407176 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:06.407436 | orchestrator | 2025-05-25 00:44:06.408116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:06.409182 | orchestrator | Sunday 25 May 2025 00:44:06 +0000 (0:00:00.202) 0:00:29.674 ************ 2025-05-25 00:44:07.010242 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1) 2025-05-25 00:44:07.010500 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1) 2025-05-25 00:44:07.011620 | orchestrator | 2025-05-25 
00:44:07.012226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:07.012795 | orchestrator | Sunday 25 May 2025 00:44:07 +0000 (0:00:00.602) 0:00:30.276 ************ 2025-05-25 00:44:07.728507 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9) 2025-05-25 00:44:07.728656 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9) 2025-05-25 00:44:07.728946 | orchestrator | 2025-05-25 00:44:07.729559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:07.730122 | orchestrator | Sunday 25 May 2025 00:44:07 +0000 (0:00:00.716) 0:00:30.993 ************ 2025-05-25 00:44:08.393854 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5) 2025-05-25 00:44:08.394010 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5) 2025-05-25 00:44:08.394153 | orchestrator | 2025-05-25 00:44:08.394788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:08.395240 | orchestrator | Sunday 25 May 2025 00:44:08 +0000 (0:00:00.666) 0:00:31.659 ************ 2025-05-25 00:44:08.827065 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825) 2025-05-25 00:44:08.827156 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825) 2025-05-25 00:44:08.828207 | orchestrator | 2025-05-25 00:44:08.830776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:44:08.831116 | orchestrator | Sunday 25 May 2025 00:44:08 +0000 (0:00:00.432) 0:00:32.092 ************ 2025-05-25 00:44:09.168983 | orchestrator | ok: [testbed-node-5] => 
(item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 00:44:09.169079 | orchestrator | 2025-05-25 00:44:09.169257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:09.169523 | orchestrator | Sunday 25 May 2025 00:44:09 +0000 (0:00:00.337) 0:00:32.429 ************ 2025-05-25 00:44:09.567078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-25 00:44:09.568558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-25 00:44:09.568769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-25 00:44:09.569258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-25 00:44:09.569822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-25 00:44:09.570172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-25 00:44:09.570714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-25 00:44:09.573107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-25 00:44:09.573131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-25 00:44:09.573171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-25 00:44:09.573184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-25 00:44:09.573196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-25 00:44:09.573413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => 
(item=sr0) 2025-05-25 00:44:09.574658 | orchestrator | 2025-05-25 00:44:09.575212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:09.575683 | orchestrator | Sunday 25 May 2025 00:44:09 +0000 (0:00:00.404) 0:00:32.834 ************ 2025-05-25 00:44:09.767063 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:09.767219 | orchestrator | 2025-05-25 00:44:09.768399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:09.768892 | orchestrator | Sunday 25 May 2025 00:44:09 +0000 (0:00:00.199) 0:00:33.034 ************ 2025-05-25 00:44:09.970445 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:09.970652 | orchestrator | 2025-05-25 00:44:09.971860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:09.972845 | orchestrator | Sunday 25 May 2025 00:44:09 +0000 (0:00:00.203) 0:00:33.237 ************ 2025-05-25 00:44:10.182455 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:10.183531 | orchestrator | 2025-05-25 00:44:10.183873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:10.184431 | orchestrator | Sunday 25 May 2025 00:44:10 +0000 (0:00:00.210) 0:00:33.448 ************ 2025-05-25 00:44:10.386687 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:10.387229 | orchestrator | 2025-05-25 00:44:10.388140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:10.389028 | orchestrator | Sunday 25 May 2025 00:44:10 +0000 (0:00:00.205) 0:00:33.653 ************ 2025-05-25 00:44:10.610679 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:10.611050 | orchestrator | 2025-05-25 00:44:10.612322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:10.612941 | 
orchestrator | Sunday 25 May 2025 00:44:10 +0000 (0:00:00.223) 0:00:33.876 ************ 2025-05-25 00:44:11.213731 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:11.214544 | orchestrator | 2025-05-25 00:44:11.215089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:11.216260 | orchestrator | Sunday 25 May 2025 00:44:11 +0000 (0:00:00.601) 0:00:34.478 ************ 2025-05-25 00:44:11.413100 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:11.413605 | orchestrator | 2025-05-25 00:44:11.414505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:11.415543 | orchestrator | Sunday 25 May 2025 00:44:11 +0000 (0:00:00.198) 0:00:34.676 ************ 2025-05-25 00:44:11.627276 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:11.627452 | orchestrator | 2025-05-25 00:44:11.627480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:11.629044 | orchestrator | Sunday 25 May 2025 00:44:11 +0000 (0:00:00.214) 0:00:34.890 ************ 2025-05-25 00:44:12.246744 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-25 00:44:12.247704 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-25 00:44:12.248947 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-25 00:44:12.250000 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-25 00:44:12.250532 | orchestrator | 2025-05-25 00:44:12.251487 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:12.252617 | orchestrator | Sunday 25 May 2025 00:44:12 +0000 (0:00:00.621) 0:00:35.512 ************ 2025-05-25 00:44:12.441952 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:12.443006 | orchestrator | 2025-05-25 00:44:12.445888 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-05-25 00:44:12.446405 | orchestrator | Sunday 25 May 2025 00:44:12 +0000 (0:00:00.197) 0:00:35.709 ************ 2025-05-25 00:44:12.635957 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:12.637243 | orchestrator | 2025-05-25 00:44:12.637899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:12.639023 | orchestrator | Sunday 25 May 2025 00:44:12 +0000 (0:00:00.189) 0:00:35.898 ************ 2025-05-25 00:44:12.821060 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:12.821554 | orchestrator | 2025-05-25 00:44:12.822924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:44:12.823329 | orchestrator | Sunday 25 May 2025 00:44:12 +0000 (0:00:00.188) 0:00:36.087 ************ 2025-05-25 00:44:13.010314 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:13.010543 | orchestrator | 2025-05-25 00:44:13.013026 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-25 00:44:13.013059 | orchestrator | Sunday 25 May 2025 00:44:13 +0000 (0:00:00.186) 0:00:36.274 ************ 2025-05-25 00:44:13.205691 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-25 00:44:13.205784 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-25 00:44:13.206639 | orchestrator | 2025-05-25 00:44:13.207793 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-25 00:44:13.208128 | orchestrator | Sunday 25 May 2025 00:44:13 +0000 (0:00:00.196) 0:00:36.471 ************ 2025-05-25 00:44:13.340820 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:13.341215 | orchestrator | 2025-05-25 00:44:13.342472 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-25 00:44:13.343043 | orchestrator | Sunday 25 May 
2025 00:44:13 +0000 (0:00:00.135) 0:00:36.607 ************ 2025-05-25 00:44:13.473041 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:13.473682 | orchestrator | 2025-05-25 00:44:13.474508 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-25 00:44:13.475444 | orchestrator | Sunday 25 May 2025 00:44:13 +0000 (0:00:00.132) 0:00:36.739 ************ 2025-05-25 00:44:13.773176 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:13.773807 | orchestrator | 2025-05-25 00:44:13.774799 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-25 00:44:13.775640 | orchestrator | Sunday 25 May 2025 00:44:13 +0000 (0:00:00.299) 0:00:37.039 ************ 2025-05-25 00:44:13.914583 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:44:13.915489 | orchestrator | 2025-05-25 00:44:13.918419 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-25 00:44:13.918459 | orchestrator | Sunday 25 May 2025 00:44:13 +0000 (0:00:00.140) 0:00:37.180 ************ 2025-05-25 00:44:14.091201 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f34e313d-bca1-5ff8-8346-de91d98588f2'}}) 2025-05-25 00:44:14.092325 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31c7786-f287-566f-81cf-65786b8dbda6'}}) 2025-05-25 00:44:14.093758 | orchestrator | 2025-05-25 00:44:14.095087 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-25 00:44:14.095888 | orchestrator | Sunday 25 May 2025 00:44:14 +0000 (0:00:00.176) 0:00:37.357 ************ 2025-05-25 00:44:14.252768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f34e313d-bca1-5ff8-8346-de91d98588f2'}})  2025-05-25 00:44:14.253320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': 'a31c7786-f287-566f-81cf-65786b8dbda6'}})  2025-05-25 00:44:14.253574 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:14.256642 | orchestrator | 2025-05-25 00:44:14.257696 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-25 00:44:14.258602 | orchestrator | Sunday 25 May 2025 00:44:14 +0000 (0:00:00.160) 0:00:37.518 ************ 2025-05-25 00:44:14.421472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f34e313d-bca1-5ff8-8346-de91d98588f2'}})  2025-05-25 00:44:14.421729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31c7786-f287-566f-81cf-65786b8dbda6'}})  2025-05-25 00:44:14.422711 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:14.424952 | orchestrator | 2025-05-25 00:44:14.424995 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-25 00:44:14.425201 | orchestrator | Sunday 25 May 2025 00:44:14 +0000 (0:00:00.169) 0:00:37.687 ************ 2025-05-25 00:44:14.586953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f34e313d-bca1-5ff8-8346-de91d98588f2'}})  2025-05-25 00:44:14.588137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31c7786-f287-566f-81cf-65786b8dbda6'}})  2025-05-25 00:44:14.589519 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:44:14.590621 | orchestrator | 2025-05-25 00:44:14.591210 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-25 00:44:14.591909 | orchestrator | Sunday 25 May 2025 00:44:14 +0000 (0:00:00.166) 0:00:37.853 ************ 2025-05-25 00:44:14.730919 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:44:14.731793 | orchestrator | 2025-05-25 00:44:14.732706 | orchestrator | TASK [Set OSD devices config data] 
*********************************************
2025-05-25 00:44:14.733515 | orchestrator | Sunday 25 May 2025 00:44:14 +0000 (0:00:00.144) 0:00:37.997 ************
2025-05-25 00:44:14.885396 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:44:14.886179 | orchestrator |
2025-05-25 00:44:14.887432 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-25 00:44:14.888445 | orchestrator | Sunday 25 May 2025 00:44:14 +0000 (0:00:00.153) 0:00:38.151 ************
2025-05-25 00:44:15.027271 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:44:15.028572 | orchestrator |
2025-05-25 00:44:15.029991 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-25 00:44:15.030528 | orchestrator | Sunday 25 May 2025 00:44:15 +0000 (0:00:00.141) 0:00:38.292 ************
2025-05-25 00:44:15.159036 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:44:15.160318 | orchestrator |
2025-05-25 00:44:15.160510 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-25 00:44:15.162125 | orchestrator | Sunday 25 May 2025 00:44:15 +0000 (0:00:00.133) 0:00:38.426 ************
2025-05-25 00:44:15.283907 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:44:15.284457 | orchestrator |
2025-05-25 00:44:15.285666 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-25 00:44:15.286705 | orchestrator | Sunday 25 May 2025 00:44:15 +0000 (0:00:00.121) 0:00:38.548 ************
2025-05-25 00:44:15.434382 | orchestrator | ok: [testbed-node-5] => {
2025-05-25 00:44:15.434458 | orchestrator |  "ceph_osd_devices": {
2025-05-25 00:44:15.435070 | orchestrator |  "sdb": {
2025-05-25 00:44:15.435957 | orchestrator |  "osd_lvm_uuid": "f34e313d-bca1-5ff8-8346-de91d98588f2"
2025-05-25 00:44:15.437263 | orchestrator |  },
2025-05-25 00:44:15.438441 | orchestrator |  "sdc": {
2025-05-25 00:44:15.439201 | orchestrator |  "osd_lvm_uuid": "a31c7786-f287-566f-81cf-65786b8dbda6"
2025-05-25 00:44:15.439868 | orchestrator |  }
2025-05-25 00:44:15.440327 | orchestrator |  }
2025-05-25 00:44:15.440980 | orchestrator | }
2025-05-25 00:44:15.441384 | orchestrator |
2025-05-25 00:44:15.441913 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-25 00:44:15.442494 | orchestrator | Sunday 25 May 2025 00:44:15 +0000 (0:00:00.151) 0:00:38.699 ************
2025-05-25 00:44:15.744055 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:44:15.744917 | orchestrator |
2025-05-25 00:44:15.745904 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-25 00:44:15.746138 | orchestrator | Sunday 25 May 2025 00:44:15 +0000 (0:00:00.138) 0:00:39.010 ************
2025-05-25 00:44:15.881833 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:44:15.882170 | orchestrator |
2025-05-25 00:44:15.883276 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-25 00:44:15.884114 | orchestrator | Sunday 25 May 2025 00:44:15 +0000 (0:00:00.133) 0:00:39.149 ************
2025-05-25 00:44:16.016499 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:44:16.017443 | orchestrator |
2025-05-25 00:44:16.019042 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-25 00:44:16.021606 | orchestrator | Sunday 25 May 2025 00:44:16 +0000 (0:00:00.133) 0:00:39.282 ************
2025-05-25 00:44:16.285269 | orchestrator | changed: [testbed-node-5] => {
2025-05-25 00:44:16.285749 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-25 00:44:16.288634 | orchestrator |  "ceph_osd_devices": {
2025-05-25 00:44:16.289973 | orchestrator |  "sdb": {
2025-05-25 00:44:16.291117 | orchestrator |  "osd_lvm_uuid": "f34e313d-bca1-5ff8-8346-de91d98588f2"
2025-05-25 00:44:16.292155 | orchestrator |  },
2025-05-25 00:44:16.293241 | orchestrator |  "sdc": {
2025-05-25 00:44:16.293796 | orchestrator |  "osd_lvm_uuid": "a31c7786-f287-566f-81cf-65786b8dbda6"
2025-05-25 00:44:16.294488 | orchestrator |  }
2025-05-25 00:44:16.295303 | orchestrator |  },
2025-05-25 00:44:16.296342 | orchestrator |  "lvm_volumes": [
2025-05-25 00:44:16.296731 | orchestrator |  {
2025-05-25 00:44:16.297195 | orchestrator |  "data": "osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2",
2025-05-25 00:44:16.297743 | orchestrator |  "data_vg": "ceph-f34e313d-bca1-5ff8-8346-de91d98588f2"
2025-05-25 00:44:16.299568 | orchestrator |  },
2025-05-25 00:44:16.300227 | orchestrator |  {
2025-05-25 00:44:16.300612 | orchestrator |  "data": "osd-block-a31c7786-f287-566f-81cf-65786b8dbda6",
2025-05-25 00:44:16.300903 | orchestrator |  "data_vg": "ceph-a31c7786-f287-566f-81cf-65786b8dbda6"
2025-05-25 00:44:16.302137 | orchestrator |  }
2025-05-25 00:44:16.302576 | orchestrator |  ]
2025-05-25 00:44:16.303445 | orchestrator |  }
2025-05-25 00:44:16.303649 | orchestrator | }
2025-05-25 00:44:16.304280 | orchestrator |
2025-05-25 00:44:16.304599 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-25 00:44:16.304830 | orchestrator | Sunday 25 May 2025 00:44:16 +0000 (0:00:00.267) 0:00:39.550 ************
2025-05-25 00:44:17.350100 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-25 00:44:17.350410 | orchestrator |
2025-05-25 00:44:17.351284 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:44:17.352901 | orchestrator | 2025-05-25 00:44:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:44:17.352977 | orchestrator | 2025-05-25 00:44:17 | INFO  | Please wait and do not abort execution.
2025-05-25 00:44:17.353429 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-25 00:44:17.354707 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-25 00:44:17.355218 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-25 00:44:17.355915 | orchestrator |
2025-05-25 00:44:17.356890 | orchestrator |
2025-05-25 00:44:17.358713 | orchestrator |
2025-05-25 00:44:17.358815 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:44:17.359249 | orchestrator | Sunday 25 May 2025 00:44:17 +0000 (0:00:01.064) 0:00:40.615 ************
2025-05-25 00:44:17.359783 | orchestrator | ===============================================================================
2025-05-25 00:44:17.360613 | orchestrator | Write configuration file ------------------------------------------------ 4.48s
2025-05-25 00:44:17.361260 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s
2025-05-25 00:44:17.361696 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s
2025-05-25 00:44:17.362109 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-05-25 00:44:17.362742 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2025-05-25 00:44:17.363571 | orchestrator | Print configuration data ------------------------------------------------ 0.76s
2025-05-25 00:44:17.364207 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-05-25 00:44:17.365278 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-05-25 00:44:17.365405 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2025-05-25 00:44:17.365672 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-05-25 00:44:17.365911 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-05-25 00:44:17.366278 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-05-25 00:44:17.366880 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-05-25 00:44:17.367016 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-05-25 00:44:17.367450 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2025-05-25 00:44:17.367753 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.59s
2025-05-25 00:44:17.368333 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.57s
2025-05-25 00:44:17.368827 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.57s
2025-05-25 00:44:17.369103 | orchestrator | Print WAL devices ------------------------------------------------------- 0.57s
2025-05-25 00:44:17.369579 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.56s
2025-05-25 00:44:29.111780 | orchestrator | 2025-05-25 00:44:29 | INFO  | Task 905760ee-5080-4cbe-bd3b-0ce81056857b is running in background. Output coming soon.
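The "Generate lvm_volumes structure (block only)" step recorded above expands each entry of `ceph_osd_devices` into the LV/VG name pair that ceph-ansible consumes: `osd-block-<uuid>` inside volume group `ceph-<uuid>`. A minimal sketch of that mapping, using the testbed-node-5 values from the log (function and variable names are illustrative, not the actual task code):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Derive a ceph-ansible-style lvm_volumes list from a ceph_osd_devices map."""
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# Values printed by the "Print ceph_osd_devices" task for testbed-node-5
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "f34e313d-bca1-5ff8-8346-de91d98588f2"},
    "sdc": {"osd_lvm_uuid": "a31c7786-f287-566f-81cf-65786b8dbda6"},
}
lvm_volumes = build_lvm_volumes(ceph_osd_devices)
```

The output matches the `lvm_volumes` list shown in the "Print configuration data" task, which the "Write configuration file" handler then persists on testbed-manager.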
2025-05-25 00:45:02.717521 | orchestrator | 2025-05-25 00:44:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-05-25 00:45:02.717642 | orchestrator | 2025-05-25 00:44:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-05-25 00:45:02.717672 | orchestrator | 2025-05-25 00:44:55 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-05-25 00:45:02.717686 | orchestrator | 2025-05-25 00:44:55 | INFO  | Handling group overwrites in 99-overwrite 2025-05-25 00:45:02.717698 | orchestrator | 2025-05-25 00:44:55 | INFO  | Removing group frr:children from 60-generic 2025-05-25 00:45:02.717709 | orchestrator | 2025-05-25 00:44:55 | INFO  | Removing group storage:children from 50-kolla 2025-05-25 00:45:02.717721 | orchestrator | 2025-05-25 00:44:55 | INFO  | Removing group netbird:children from 50-infrastruture 2025-05-25 00:45:02.717732 | orchestrator | 2025-05-25 00:44:55 | INFO  | Removing group ceph-mds from 50-ceph 2025-05-25 00:45:02.717744 | orchestrator | 2025-05-25 00:44:55 | INFO  | Removing group ceph-rgw from 50-ceph 2025-05-25 00:45:02.717755 | orchestrator | 2025-05-25 00:44:55 | INFO  | Handling group overwrites in 20-roles 2025-05-25 00:45:02.717766 | orchestrator | 2025-05-25 00:44:55 | INFO  | Removing group k3s_node from 50-infrastruture 2025-05-25 00:45:02.717777 | orchestrator | 2025-05-25 00:44:55 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-05-25 00:45:02.717788 | orchestrator | 2025-05-25 00:45:02 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-05-25 00:45:04.130302 | orchestrator | 2025-05-25 00:45:04 | INFO  | Task 2a08c9f7-8d98-4a72-a619-8922085eb344 (ceph-create-lvm-devices) was prepared for execution. 2025-05-25 00:45:04.130445 | orchestrator | 2025-05-25 00:45:04 | INFO  | It takes a moment until task 2a08c9f7-8d98-4a72-a619-8922085eb344 (ceph-create-lvm-devices) has been started and output is visible here. 
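The "Handling group overwrites" / "Removing group … from …" INFO lines above reflect inventory reconciliation: a group defined in a higher-priority inventory file (such as 99-overwrite) is removed from lower-priority files so it is defined only once. A minimal sketch of that behaviour, assuming a simple file-to-groups data model (not the actual osism inventory-reconciler code):

```python
def apply_group_overwrites(inventory: dict, overwrite_file: str) -> list:
    """Remove groups defined in overwrite_file from all other inventory files.

    inventory maps file name -> set of group names defined in it.
    Returns (group, file) pairs, mirroring the "Removing group X from Y" lines.
    """
    removals = []
    for group in sorted(inventory.get(overwrite_file, set())):
        for fname, groups in inventory.items():
            if fname != overwrite_file and group in groups:
                groups.discard(group)
                removals.append((group, fname))
    return removals

# Hypothetical contents; only the file names mirror the log above.
inventory = {
    "99-overwrite": {"frr:children", "storage:children"},
    "60-generic": {"frr:children", "dnsmasq"},
    "50-kolla": {"storage:children", "compute"},
}
removals = apply_group_overwrites(inventory, "99-overwrite")
```

After the call, `frr:children` survives only in 99-overwrite, matching the precedence the reconciler enforces before the ceph-create-lvm-devices play starts.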
2025-05-25 00:45:06.884526 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-25 00:45:07.401614 | orchestrator | 2025-05-25 00:45:07.401926 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-25 00:45:07.402598 | orchestrator | 2025-05-25 00:45:07.402910 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-25 00:45:07.403441 | orchestrator | Sunday 25 May 2025 00:45:07 +0000 (0:00:00.452) 0:00:00.452 ************ 2025-05-25 00:45:07.661204 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-25 00:45:07.661430 | orchestrator | 2025-05-25 00:45:07.661473 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-25 00:45:07.662583 | orchestrator | Sunday 25 May 2025 00:45:07 +0000 (0:00:00.255) 0:00:00.708 ************ 2025-05-25 00:45:07.880717 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:45:07.880934 | orchestrator | 2025-05-25 00:45:07.881107 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:07.881491 | orchestrator | Sunday 25 May 2025 00:45:07 +0000 (0:00:00.225) 0:00:00.933 ************ 2025-05-25 00:45:08.575308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-25 00:45:08.575602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-25 00:45:08.577855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-25 00:45:08.577946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-25 00:45:08.578269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-25 00:45:08.579646 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-25 00:45:08.580635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-25 00:45:08.583700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-25 00:45:08.583769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-25 00:45:08.583791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-25 00:45:08.583810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-25 00:45:08.583829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-25 00:45:08.583846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-25 00:45:08.583858 | orchestrator | 2025-05-25 00:45:08.583920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:08.584541 | orchestrator | Sunday 25 May 2025 00:45:08 +0000 (0:00:00.694) 0:00:01.628 ************ 2025-05-25 00:45:08.759205 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:08.759928 | orchestrator | 2025-05-25 00:45:08.760667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:08.761630 | orchestrator | Sunday 25 May 2025 00:45:08 +0000 (0:00:00.183) 0:00:01.812 ************ 2025-05-25 00:45:08.949007 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:08.949464 | orchestrator | 2025-05-25 00:45:08.950164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:08.950923 | orchestrator | Sunday 25 May 2025 00:45:08 +0000 (0:00:00.188) 0:00:02.000 ************ 2025-05-25 00:45:09.133482 | orchestrator | skipping: 
[testbed-node-3] 2025-05-25 00:45:09.133603 | orchestrator | 2025-05-25 00:45:09.133708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:09.134827 | orchestrator | Sunday 25 May 2025 00:45:09 +0000 (0:00:00.184) 0:00:02.185 ************ 2025-05-25 00:45:09.339783 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:09.340630 | orchestrator | 2025-05-25 00:45:09.341638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:09.342088 | orchestrator | Sunday 25 May 2025 00:45:09 +0000 (0:00:00.205) 0:00:02.391 ************ 2025-05-25 00:45:09.555803 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:09.555900 | orchestrator | 2025-05-25 00:45:09.560695 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:09.560852 | orchestrator | Sunday 25 May 2025 00:45:09 +0000 (0:00:00.215) 0:00:02.607 ************ 2025-05-25 00:45:09.744594 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:09.745178 | orchestrator | 2025-05-25 00:45:09.745930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:09.746532 | orchestrator | Sunday 25 May 2025 00:45:09 +0000 (0:00:00.189) 0:00:02.797 ************ 2025-05-25 00:45:09.933257 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:09.934205 | orchestrator | 2025-05-25 00:45:09.935042 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:09.936816 | orchestrator | Sunday 25 May 2025 00:45:09 +0000 (0:00:00.188) 0:00:02.985 ************ 2025-05-25 00:45:10.133775 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:10.134526 | orchestrator | 2025-05-25 00:45:10.135093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:10.135680 | 
orchestrator | Sunday 25 May 2025 00:45:10 +0000 (0:00:00.200) 0:00:03.186 ************ 2025-05-25 00:45:10.722211 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b) 2025-05-25 00:45:10.722662 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b) 2025-05-25 00:45:10.723784 | orchestrator | 2025-05-25 00:45:10.724439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:10.726298 | orchestrator | Sunday 25 May 2025 00:45:10 +0000 (0:00:00.588) 0:00:03.774 ************ 2025-05-25 00:45:11.504037 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6) 2025-05-25 00:45:11.504142 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6) 2025-05-25 00:45:11.504158 | orchestrator | 2025-05-25 00:45:11.504538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:11.505052 | orchestrator | Sunday 25 May 2025 00:45:11 +0000 (0:00:00.776) 0:00:04.551 ************ 2025-05-25 00:45:11.924147 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8) 2025-05-25 00:45:11.924422 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8) 2025-05-25 00:45:11.925235 | orchestrator | 2025-05-25 00:45:11.926135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:11.926664 | orchestrator | Sunday 25 May 2025 00:45:11 +0000 (0:00:00.424) 0:00:04.976 ************ 2025-05-25 00:45:12.342427 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d) 2025-05-25 00:45:12.342924 | orchestrator | ok: [testbed-node-3] => 
(item=scsi-SQEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d) 2025-05-25 00:45:12.344375 | orchestrator | 2025-05-25 00:45:12.345040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:12.345700 | orchestrator | Sunday 25 May 2025 00:45:12 +0000 (0:00:00.416) 0:00:05.392 ************ 2025-05-25 00:45:12.687751 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 00:45:12.688794 | orchestrator | 2025-05-25 00:45:12.689803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:12.692025 | orchestrator | Sunday 25 May 2025 00:45:12 +0000 (0:00:00.347) 0:00:05.740 ************ 2025-05-25 00:45:13.152501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-25 00:45:13.155387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-25 00:45:13.155431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-25 00:45:13.155488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-25 00:45:13.156262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-25 00:45:13.156796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-25 00:45:13.157300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-25 00:45:13.157996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-25 00:45:13.158604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-25 00:45:13.158894 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-25 00:45:13.159193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-25 00:45:13.159648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-25 00:45:13.159875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-25 00:45:13.160171 | orchestrator | 2025-05-25 00:45:13.160745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:13.160948 | orchestrator | Sunday 25 May 2025 00:45:13 +0000 (0:00:00.463) 0:00:06.203 ************ 2025-05-25 00:45:13.365614 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:13.366696 | orchestrator | 2025-05-25 00:45:13.367828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:13.370067 | orchestrator | Sunday 25 May 2025 00:45:13 +0000 (0:00:00.214) 0:00:06.418 ************ 2025-05-25 00:45:13.555930 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:13.556495 | orchestrator | 2025-05-25 00:45:13.556883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:13.557720 | orchestrator | Sunday 25 May 2025 00:45:13 +0000 (0:00:00.190) 0:00:06.608 ************ 2025-05-25 00:45:13.756145 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:13.756290 | orchestrator | 2025-05-25 00:45:13.756775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:13.757396 | orchestrator | Sunday 25 May 2025 00:45:13 +0000 (0:00:00.200) 0:00:06.809 ************ 2025-05-25 00:45:13.946915 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:13.947301 | orchestrator | 2025-05-25 00:45:13.948674 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-25 00:45:13.949793 | orchestrator | Sunday 25 May 2025 00:45:13 +0000 (0:00:00.189) 0:00:06.999 ************ 2025-05-25 00:45:14.474940 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:14.477165 | orchestrator | 2025-05-25 00:45:14.480233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:14.483863 | orchestrator | Sunday 25 May 2025 00:45:14 +0000 (0:00:00.523) 0:00:07.522 ************ 2025-05-25 00:45:14.688574 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:14.688673 | orchestrator | 2025-05-25 00:45:14.689110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:14.690391 | orchestrator | Sunday 25 May 2025 00:45:14 +0000 (0:00:00.219) 0:00:07.741 ************ 2025-05-25 00:45:14.885302 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:14.885452 | orchestrator | 2025-05-25 00:45:14.885470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:14.886468 | orchestrator | Sunday 25 May 2025 00:45:14 +0000 (0:00:00.196) 0:00:07.938 ************ 2025-05-25 00:45:15.098955 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:15.102464 | orchestrator | 2025-05-25 00:45:15.102507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:15.102522 | orchestrator | Sunday 25 May 2025 00:45:15 +0000 (0:00:00.212) 0:00:08.150 ************ 2025-05-25 00:45:15.755975 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-25 00:45:15.756139 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-25 00:45:15.756556 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-25 00:45:15.757041 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-25 00:45:15.757611 | orchestrator | 2025-05-25 
00:45:15.758142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:15.758827 | orchestrator | Sunday 25 May 2025 00:45:15 +0000 (0:00:00.658) 0:00:08.809 ************ 2025-05-25 00:45:15.964900 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:15.965278 | orchestrator | 2025-05-25 00:45:15.966444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:15.968628 | orchestrator | Sunday 25 May 2025 00:45:15 +0000 (0:00:00.207) 0:00:09.016 ************ 2025-05-25 00:45:16.154319 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:16.154605 | orchestrator | 2025-05-25 00:45:16.155936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:16.158233 | orchestrator | Sunday 25 May 2025 00:45:16 +0000 (0:00:00.188) 0:00:09.205 ************ 2025-05-25 00:45:16.356042 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:16.356309 | orchestrator | 2025-05-25 00:45:16.357360 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:45:16.357943 | orchestrator | Sunday 25 May 2025 00:45:16 +0000 (0:00:00.202) 0:00:09.408 ************ 2025-05-25 00:45:16.561466 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:16.564054 | orchestrator | 2025-05-25 00:45:16.564113 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-25 00:45:16.564274 | orchestrator | Sunday 25 May 2025 00:45:16 +0000 (0:00:00.205) 0:00:09.613 ************ 2025-05-25 00:45:16.693546 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:16.694068 | orchestrator | 2025-05-25 00:45:16.695032 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-25 00:45:16.697323 | orchestrator | Sunday 25 May 2025 00:45:16 +0000 (0:00:00.132) 
0:00:09.745 ************ 2025-05-25 00:45:16.889761 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '91dc6ac0-e554-5716-a575-6858f2de7d62'}}) 2025-05-25 00:45:16.889858 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'}}) 2025-05-25 00:45:16.890673 | orchestrator | 2025-05-25 00:45:16.893147 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-25 00:45:16.894132 | orchestrator | Sunday 25 May 2025 00:45:16 +0000 (0:00:00.194) 0:00:09.940 ************ 2025-05-25 00:45:19.102210 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'}) 2025-05-25 00:45:19.102319 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'}) 2025-05-25 00:45:19.102542 | orchestrator | 2025-05-25 00:45:19.102865 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-25 00:45:19.103506 | orchestrator | Sunday 25 May 2025 00:45:19 +0000 (0:00:02.213) 0:00:12.154 ************ 2025-05-25 00:45:19.260719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:19.260911 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:19.262115 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:19.262864 | orchestrator | 2025-05-25 00:45:19.263574 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-25 00:45:19.265754 | orchestrator | Sunday 25 May 2025 
00:45:19 +0000 (0:00:00.158) 0:00:12.313 ************ 2025-05-25 00:45:20.670299 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'}) 2025-05-25 00:45:20.671085 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'}) 2025-05-25 00:45:20.671726 | orchestrator | 2025-05-25 00:45:20.672609 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-25 00:45:20.672828 | orchestrator | Sunday 25 May 2025 00:45:20 +0000 (0:00:01.406) 0:00:13.719 ************ 2025-05-25 00:45:20.830900 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:20.831539 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:20.832273 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:20.835588 | orchestrator | 2025-05-25 00:45:20.835679 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-25 00:45:20.835696 | orchestrator | Sunday 25 May 2025 00:45:20 +0000 (0:00:00.163) 0:00:13.883 ************ 2025-05-25 00:45:20.978286 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:20.978429 | orchestrator | 2025-05-25 00:45:20.978825 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-25 00:45:20.979382 | orchestrator | Sunday 25 May 2025 00:45:20 +0000 (0:00:00.146) 0:00:14.030 ************ 2025-05-25 00:45:21.136523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 
'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:21.136619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:21.136849 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:21.137307 | orchestrator | 2025-05-25 00:45:21.137815 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-25 00:45:21.138180 | orchestrator | Sunday 25 May 2025 00:45:21 +0000 (0:00:00.158) 0:00:14.189 ************ 2025-05-25 00:45:21.298430 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:21.298631 | orchestrator | 2025-05-25 00:45:21.299532 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-25 00:45:21.302126 | orchestrator | Sunday 25 May 2025 00:45:21 +0000 (0:00:00.160) 0:00:14.350 ************ 2025-05-25 00:45:21.475656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:21.475757 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:21.476641 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:21.479290 | orchestrator | 2025-05-25 00:45:21.480374 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-25 00:45:21.480858 | orchestrator | Sunday 25 May 2025 00:45:21 +0000 (0:00:00.176) 0:00:14.526 ************ 2025-05-25 00:45:21.773971 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:21.774614 | orchestrator | 2025-05-25 00:45:21.776895 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-25 00:45:21.777663 | orchestrator | Sunday 
25 May 2025 00:45:21 +0000 (0:00:00.298) 0:00:14.825 ************ 2025-05-25 00:45:21.934745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:21.934943 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:21.935066 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:21.935842 | orchestrator | 2025-05-25 00:45:21.936553 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-25 00:45:21.936587 | orchestrator | Sunday 25 May 2025 00:45:21 +0000 (0:00:00.162) 0:00:14.987 ************ 2025-05-25 00:45:22.077838 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:45:22.077936 | orchestrator | 2025-05-25 00:45:22.078006 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-25 00:45:22.078288 | orchestrator | Sunday 25 May 2025 00:45:22 +0000 (0:00:00.143) 0:00:15.130 ************ 2025-05-25 00:45:22.254105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:22.255315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:22.257050 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:22.257907 | orchestrator | 2025-05-25 00:45:22.259382 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-25 00:45:22.259843 | orchestrator | Sunday 25 May 2025 00:45:22 +0000 (0:00:00.173) 0:00:15.304 ************ 2025-05-25 00:45:22.422287 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:22.422433 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:22.423103 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:22.424670 | orchestrator | 2025-05-25 00:45:22.425964 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-25 00:45:22.426937 | orchestrator | Sunday 25 May 2025 00:45:22 +0000 (0:00:00.168) 0:00:15.473 ************ 2025-05-25 00:45:22.596600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})  2025-05-25 00:45:22.596720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})  2025-05-25 00:45:22.596977 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:22.597127 | orchestrator | 2025-05-25 00:45:22.597780 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-25 00:45:22.598509 | orchestrator | Sunday 25 May 2025 00:45:22 +0000 (0:00:00.175) 0:00:15.649 ************ 2025-05-25 00:45:22.726168 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:22.726368 | orchestrator | 2025-05-25 00:45:22.727657 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-25 00:45:22.728726 | orchestrator | Sunday 25 May 2025 00:45:22 +0000 (0:00:00.129) 0:00:15.778 ************ 2025-05-25 00:45:22.895936 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:22.896459 | orchestrator | 2025-05-25 00:45:22.898213 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-05-25 00:45:22.899202 | orchestrator | Sunday 25 May 2025 00:45:22 +0000 (0:00:00.170) 0:00:15.948 ************ 2025-05-25 00:45:23.026734 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:23.026827 | orchestrator | 2025-05-25 00:45:23.027783 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-25 00:45:23.028165 | orchestrator | Sunday 25 May 2025 00:45:23 +0000 (0:00:00.131) 0:00:16.079 ************ 2025-05-25 00:45:23.171076 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 00:45:23.171165 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-25 00:45:23.172364 | orchestrator | } 2025-05-25 00:45:23.172925 | orchestrator | 2025-05-25 00:45:23.175725 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-25 00:45:23.176447 | orchestrator | Sunday 25 May 2025 00:45:23 +0000 (0:00:00.143) 0:00:16.223 ************ 2025-05-25 00:45:23.331246 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 00:45:23.331517 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-25 00:45:23.333650 | orchestrator | } 2025-05-25 00:45:23.333682 | orchestrator | 2025-05-25 00:45:23.333696 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-25 00:45:23.333710 | orchestrator | Sunday 25 May 2025 00:45:23 +0000 (0:00:00.158) 0:00:16.381 ************ 2025-05-25 00:45:23.475012 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 00:45:23.475769 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-25 00:45:23.476629 | orchestrator | } 2025-05-25 00:45:23.477744 | orchestrator | 2025-05-25 00:45:23.477880 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-25 00:45:23.479243 | orchestrator | Sunday 25 May 2025 00:45:23 +0000 (0:00:00.145) 0:00:16.527 ************ 2025-05-25 00:45:24.461649 | orchestrator | ok: 
[testbed-node-3] 2025-05-25 00:45:24.461827 | orchestrator | 2025-05-25 00:45:24.464475 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-25 00:45:24.464504 | orchestrator | Sunday 25 May 2025 00:45:24 +0000 (0:00:00.984) 0:00:17.512 ************ 2025-05-25 00:45:24.975640 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:45:24.975724 | orchestrator | 2025-05-25 00:45:24.976350 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-25 00:45:24.977372 | orchestrator | Sunday 25 May 2025 00:45:24 +0000 (0:00:00.515) 0:00:18.027 ************ 2025-05-25 00:45:25.467066 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:45:25.467234 | orchestrator | 2025-05-25 00:45:25.468275 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-25 00:45:25.468750 | orchestrator | Sunday 25 May 2025 00:45:25 +0000 (0:00:00.490) 0:00:18.518 ************ 2025-05-25 00:45:25.617583 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:45:25.617908 | orchestrator | 2025-05-25 00:45:25.618882 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-25 00:45:25.619421 | orchestrator | Sunday 25 May 2025 00:45:25 +0000 (0:00:00.150) 0:00:18.669 ************ 2025-05-25 00:45:25.727767 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:25.728404 | orchestrator | 2025-05-25 00:45:25.729381 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-25 00:45:25.730100 | orchestrator | Sunday 25 May 2025 00:45:25 +0000 (0:00:00.110) 0:00:18.780 ************ 2025-05-25 00:45:25.843224 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:25.844289 | orchestrator | 2025-05-25 00:45:25.845043 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-25 00:45:25.847662 | orchestrator | 
Sunday 25 May 2025 00:45:25 +0000 (0:00:00.115) 0:00:18.896 ************ 2025-05-25 00:45:25.984295 | orchestrator | ok: [testbed-node-3] => { 2025-05-25 00:45:25.985930 | orchestrator |  "vgs_report": { 2025-05-25 00:45:25.986919 | orchestrator |  "vg": [] 2025-05-25 00:45:25.987172 | orchestrator |  } 2025-05-25 00:45:25.987816 | orchestrator | } 2025-05-25 00:45:25.988308 | orchestrator | 2025-05-25 00:45:25.988986 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-25 00:45:25.989485 | orchestrator | Sunday 25 May 2025 00:45:25 +0000 (0:00:00.140) 0:00:19.036 ************ 2025-05-25 00:45:26.127869 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:26.128871 | orchestrator | 2025-05-25 00:45:26.130068 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-25 00:45:26.130695 | orchestrator | Sunday 25 May 2025 00:45:26 +0000 (0:00:00.143) 0:00:19.180 ************ 2025-05-25 00:45:26.282368 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:26.282607 | orchestrator | 2025-05-25 00:45:26.283368 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-25 00:45:26.285001 | orchestrator | Sunday 25 May 2025 00:45:26 +0000 (0:00:00.150) 0:00:19.331 ************ 2025-05-25 00:45:26.414667 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:26.416191 | orchestrator | 2025-05-25 00:45:26.416913 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-25 00:45:26.417750 | orchestrator | Sunday 25 May 2025 00:45:26 +0000 (0:00:00.135) 0:00:19.466 ************ 2025-05-25 00:45:26.548933 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:26.552105 | orchestrator | 2025-05-25 00:45:26.552369 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-25 00:45:26.552395 | orchestrator | Sunday 
25 May 2025 00:45:26 +0000 (0:00:00.133) 0:00:19.600 ************ 2025-05-25 00:45:26.850088 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:26.850516 | orchestrator | 2025-05-25 00:45:26.852087 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-25 00:45:26.853076 | orchestrator | Sunday 25 May 2025 00:45:26 +0000 (0:00:00.301) 0:00:19.902 ************ 2025-05-25 00:45:27.000095 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:27.000872 | orchestrator | 2025-05-25 00:45:27.002582 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-25 00:45:27.003215 | orchestrator | Sunday 25 May 2025 00:45:26 +0000 (0:00:00.150) 0:00:20.052 ************ 2025-05-25 00:45:27.142191 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:27.142368 | orchestrator | 2025-05-25 00:45:27.143748 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-25 00:45:27.144205 | orchestrator | Sunday 25 May 2025 00:45:27 +0000 (0:00:00.142) 0:00:20.195 ************ 2025-05-25 00:45:27.284779 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:27.287809 | orchestrator | 2025-05-25 00:45:27.292266 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-25 00:45:27.292467 | orchestrator | Sunday 25 May 2025 00:45:27 +0000 (0:00:00.140) 0:00:20.335 ************ 2025-05-25 00:45:27.421142 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:27.422231 | orchestrator | 2025-05-25 00:45:27.423854 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-25 00:45:27.424952 | orchestrator | Sunday 25 May 2025 00:45:27 +0000 (0:00:00.135) 0:00:20.471 ************ 2025-05-25 00:45:27.563618 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:45:27.563800 | orchestrator | 2025-05-25 00:45:27.565916 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-25 00:45:27.567010 | orchestrator | Sunday 25 May 2025 00:45:27 +0000 (0:00:00.142) 0:00:20.613 ************
2025-05-25 00:45:27.705935 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:27.706196 | orchestrator |
2025-05-25 00:45:27.706743 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-25 00:45:27.706772 | orchestrator | Sunday 25 May 2025 00:45:27 +0000 (0:00:00.143) 0:00:20.757 ************
2025-05-25 00:45:27.853830 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:27.854072 | orchestrator |
2025-05-25 00:45:27.855185 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-25 00:45:27.855797 | orchestrator | Sunday 25 May 2025 00:45:27 +0000 (0:00:00.148) 0:00:20.905 ************
2025-05-25 00:45:27.995179 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:27.996517 | orchestrator |
2025-05-25 00:45:27.997702 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-25 00:45:27.998440 | orchestrator | Sunday 25 May 2025 00:45:27 +0000 (0:00:00.139) 0:00:21.045 ************
2025-05-25 00:45:28.141784 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:28.142012 | orchestrator |
2025-05-25 00:45:28.142833 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-25 00:45:28.144062 | orchestrator | Sunday 25 May 2025 00:45:28 +0000 (0:00:00.149) 0:00:21.194 ************
2025-05-25 00:45:28.300425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:28.302541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:28.302600 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:28.303592 | orchestrator |
2025-05-25 00:45:28.304118 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-25 00:45:28.304526 | orchestrator | Sunday 25 May 2025 00:45:28 +0000 (0:00:00.155) 0:00:21.350 ************
2025-05-25 00:45:28.641657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:28.643247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:28.644616 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:28.645231 | orchestrator |
2025-05-25 00:45:28.646308 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-25 00:45:28.647363 | orchestrator | Sunday 25 May 2025 00:45:28 +0000 (0:00:00.343) 0:00:21.693 ************
2025-05-25 00:45:28.830395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:28.830494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:28.830509 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:28.830577 | orchestrator |
2025-05-25 00:45:28.831122 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-25 00:45:28.831817 | orchestrator | Sunday 25 May 2025 00:45:28 +0000 (0:00:00.184) 0:00:21.878 ************
2025-05-25 00:45:28.989624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:28.990829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:28.992285 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:28.993945 | orchestrator |
2025-05-25 00:45:28.994458 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-25 00:45:28.994963 | orchestrator | Sunday 25 May 2025 00:45:28 +0000 (0:00:00.163) 0:00:22.041 ************
2025-05-25 00:45:29.191973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:29.192224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:29.193358 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:29.194004 | orchestrator |
2025-05-25 00:45:29.194786 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-25 00:45:29.196780 | orchestrator | Sunday 25 May 2025 00:45:29 +0000 (0:00:00.202) 0:00:22.244 ************
2025-05-25 00:45:29.358398 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:29.358770 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:29.359575 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:29.360914 | orchestrator |
2025-05-25 00:45:29.361319 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-25 00:45:29.362178 | orchestrator | Sunday 25 May 2025 00:45:29 +0000 (0:00:00.165) 0:00:22.409 ************
2025-05-25 00:45:29.523955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:29.524505 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:29.525894 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:29.526560 | orchestrator |
2025-05-25 00:45:29.527437 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-25 00:45:29.528211 | orchestrator | Sunday 25 May 2025 00:45:29 +0000 (0:00:00.164) 0:00:22.574 ************
2025-05-25 00:45:29.679540 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:29.680602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:29.681515 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:29.683762 | orchestrator |
2025-05-25 00:45:29.683792 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-25 00:45:29.683807 | orchestrator | Sunday 25 May 2025 00:45:29 +0000 (0:00:00.157) 0:00:22.731 ************
2025-05-25 00:45:30.187887 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:45:30.188063 | orchestrator |
2025-05-25 00:45:30.189234 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-25 00:45:30.190203 | orchestrator | Sunday 25 May 2025 00:45:30 +0000 (0:00:00.508) 0:00:23.240 ************
2025-05-25 00:45:30.700835 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:45:30.701017 | orchestrator |
2025-05-25 00:45:30.701721 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-25 00:45:30.704206 | orchestrator | Sunday 25 May 2025 00:45:30 +0000 (0:00:00.512) 0:00:23.752 ************
2025-05-25 00:45:30.843836 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:45:30.844074 | orchestrator |
2025-05-25 00:45:30.846189 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-25 00:45:30.846938 | orchestrator | Sunday 25 May 2025 00:45:30 +0000 (0:00:00.144) 0:00:23.896 ************
2025-05-25 00:45:31.025574 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'vg_name': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:31.026066 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'vg_name': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:31.027597 | orchestrator |
2025-05-25 00:45:31.028399 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-25 00:45:31.029614 | orchestrator | Sunday 25 May 2025 00:45:31 +0000 (0:00:00.181) 0:00:24.077 ************
2025-05-25 00:45:31.361404 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:31.362699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:31.366236 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:31.366965 | orchestrator |
2025-05-25 00:45:31.367562 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-25 00:45:31.368163 | orchestrator | Sunday 25 May 2025 00:45:31 +0000 (0:00:00.334) 0:00:24.412 ************
2025-05-25 00:45:31.550132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:31.550224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:31.550316 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:31.551672 | orchestrator |
2025-05-25 00:45:31.551693 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-25 00:45:31.551734 | orchestrator | Sunday 25 May 2025 00:45:31 +0000 (0:00:00.190) 0:00:24.603 ************
2025-05-25 00:45:31.732869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:45:31.732997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:45:31.733522 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:45:31.734377 | orchestrator |
2025-05-25 00:45:31.734853 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-25 00:45:31.735646 | orchestrator | Sunday 25 May 2025 00:45:31 +0000 (0:00:00.182) 0:00:24.785 ************
2025-05-25 00:45:32.392060 | orchestrator | ok: [testbed-node-3] => {
2025-05-25 00:45:32.392594 | orchestrator |  "lvm_report": {
2025-05-25 00:45:32.393702 | orchestrator |  "lv": [
2025-05-25 00:45:32.397187 | orchestrator |  {
2025-05-25 00:45:32.397955 | orchestrator |  "lv_name": "osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62",
2025-05-25 00:45:32.398908 | orchestrator |  "vg_name": "ceph-91dc6ac0-e554-5716-a575-6858f2de7d62"
2025-05-25 00:45:32.399574 | orchestrator |  },
2025-05-25 00:45:32.400318 | orchestrator |  {
2025-05-25 00:45:32.400967 | orchestrator |  "lv_name": "osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d",
2025-05-25 00:45:32.401651 | orchestrator |  "vg_name": "ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d"
2025-05-25 00:45:32.402131 | orchestrator |  }
2025-05-25 00:45:32.402957 | orchestrator |  ],
2025-05-25 00:45:32.403483 | orchestrator |  "pv": [
2025-05-25 00:45:32.404299 | orchestrator |  {
2025-05-25 00:45:32.404809 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-25 00:45:32.405493 | orchestrator |  "vg_name": "ceph-91dc6ac0-e554-5716-a575-6858f2de7d62"
2025-05-25 00:45:32.406102 | orchestrator |  },
2025-05-25 00:45:32.406737 | orchestrator |  {
2025-05-25 00:45:32.407159 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-25 00:45:32.407868 | orchestrator |  "vg_name": "ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d"
2025-05-25 00:45:32.408688 | orchestrator |  }
2025-05-25 00:45:32.409182 | orchestrator |  ]
2025-05-25 00:45:32.409542 | orchestrator |  }
2025-05-25 00:45:32.410059 | orchestrator | }
2025-05-25 00:45:32.410509 | orchestrator |
2025-05-25 00:45:32.410904 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-25 00:45:32.411517 | orchestrator |
2025-05-25 00:45:32.412203 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-25 00:45:32.413008 | orchestrator | Sunday 25 May 2025 00:45:32 +0000 (0:00:00.658) 0:00:25.444 ************
2025-05-25 00:45:32.955239 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-25 00:45:32.955658 | orchestrator |
2025-05-25 00:45:32.956662 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-25 00:45:32.957396 | orchestrator | Sunday 25 May 2025 00:45:32 +0000 (0:00:00.563) 0:00:26.007 ************
2025-05-25 00:45:33.195466 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:45:33.195771 | orchestrator |
2025-05-25 00:45:33.196618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:33.196841 | orchestrator | Sunday 25 May 2025 00:45:33 +0000 (0:00:00.239) 0:00:26.246 ************
2025-05-25 00:45:33.664912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-25 00:45:33.665111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-25 00:45:33.667217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-25 00:45:33.669956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-25 00:45:33.670100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-25 00:45:33.670141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-25 00:45:33.670153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-25 00:45:33.670164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-25 00:45:33.670175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-25 00:45:33.670248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-25 00:45:33.670529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-25 00:45:33.671068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-25 00:45:33.671453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-25 00:45:33.672031 | orchestrator |
2025-05-25 00:45:33.672414 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:33.672848 | orchestrator | Sunday 25 May 2025 00:45:33 +0000 (0:00:00.468) 0:00:26.715 ************
2025-05-25 00:45:33.856111 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:33.856275 | orchestrator |
2025-05-25 00:45:33.856786 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:33.857349 | orchestrator | Sunday 25 May 2025 00:45:33 +0000 (0:00:00.193) 0:00:26.908 ************
2025-05-25 00:45:34.042246 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:34.042396 | orchestrator |
2025-05-25 00:45:34.043762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:34.046866 | orchestrator | Sunday 25 May 2025 00:45:34 +0000 (0:00:00.182) 0:00:27.091 ************
2025-05-25 00:45:34.227910 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:34.228222 | orchestrator |
2025-05-25 00:45:34.229230 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:34.230192 | orchestrator | Sunday 25 May 2025 00:45:34 +0000 (0:00:00.188) 0:00:27.280 ************
2025-05-25 00:45:34.409689 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:34.410530 | orchestrator |
2025-05-25 00:45:34.411182 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:34.411547 | orchestrator | Sunday 25 May 2025 00:45:34 +0000 (0:00:00.181) 0:00:27.462 ************
2025-05-25 00:45:34.622985 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:34.623612 | orchestrator |
2025-05-25 00:45:34.624383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:34.626243 | orchestrator | Sunday 25 May 2025 00:45:34 +0000 (0:00:00.212) 0:00:27.674 ************
2025-05-25 00:45:34.815579 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:34.816303 | orchestrator |
2025-05-25 00:45:34.816729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:34.817656 | orchestrator | Sunday 25 May 2025 00:45:34 +0000 (0:00:00.193) 0:00:27.868 ************
2025-05-25 00:45:35.039625 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:35.040093 | orchestrator |
2025-05-25 00:45:35.041184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:35.042253 | orchestrator | Sunday 25 May 2025 00:45:35 +0000 (0:00:00.223) 0:00:28.091 ************
2025-05-25 00:45:35.450859 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:35.451255 | orchestrator |
2025-05-25 00:45:35.452006 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:35.453251 | orchestrator | Sunday 25 May 2025 00:45:35 +0000 (0:00:00.410) 0:00:28.502 ************
2025-05-25 00:45:35.847161 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357)
2025-05-25 00:45:35.848657 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357)
2025-05-25 00:45:35.848715 | orchestrator |
2025-05-25 00:45:35.850196 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:35.850728 | orchestrator | Sunday 25 May 2025 00:45:35 +0000 (0:00:00.397) 0:00:28.899 ************
2025-05-25 00:45:36.294077 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c)
2025-05-25 00:45:36.294249 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c)
2025-05-25 00:45:36.295689 | orchestrator |
2025-05-25 00:45:36.298653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:36.298735 | orchestrator | Sunday 25 May 2025 00:45:36 +0000 (0:00:00.446) 0:00:29.346 ************
2025-05-25 00:45:36.711348 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d)
2025-05-25 00:45:36.711446 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d)
2025-05-25 00:45:36.712751 | orchestrator |
2025-05-25 00:45:36.714498 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:36.715213 | orchestrator | Sunday 25 May 2025 00:45:36 +0000 (0:00:00.416) 0:00:29.763 ************
2025-05-25 00:45:37.143560 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9)
2025-05-25 00:45:37.144680 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9)
2025-05-25 00:45:37.145540 | orchestrator |
2025-05-25 00:45:37.148604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-25 00:45:37.149027 | orchestrator | Sunday 25 May 2025 00:45:37 +0000 (0:00:00.433) 0:00:30.196 ************
2025-05-25 00:45:37.492556 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-25 00:45:37.493253 | orchestrator |
2025-05-25 00:45:37.493281 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:37.493297 | orchestrator | Sunday 25 May 2025 00:45:37 +0000 (0:00:00.347) 0:00:30.543 ************
2025-05-25 00:45:37.954891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-25 00:45:37.956001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-25 00:45:37.956809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-25 00:45:37.958144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-25 00:45:37.959719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-25 00:45:37.960552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-25 00:45:37.961172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-25 00:45:37.962143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-25 00:45:37.963142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-25 00:45:37.963687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-25 00:45:37.964592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-25 00:45:37.965147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-25 00:45:37.965782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-25 00:45:37.966381 | orchestrator |
2025-05-25 00:45:37.966951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:37.967509 | orchestrator | Sunday 25 May 2025 00:45:37 +0000 (0:00:00.463) 0:00:31.007 ************
2025-05-25 00:45:38.146408 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:38.146554 | orchestrator |
2025-05-25 00:45:38.146577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:38.147088 | orchestrator | Sunday 25 May 2025 00:45:38 +0000 (0:00:00.188) 0:00:31.196 ************
2025-05-25 00:45:38.349638 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:38.350753 | orchestrator |
2025-05-25 00:45:38.351550 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:38.352421 | orchestrator | Sunday 25 May 2025 00:45:38 +0000 (0:00:00.206) 0:00:31.402 ************
2025-05-25 00:45:38.908784 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:38.908948 | orchestrator |
2025-05-25 00:45:38.910210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:38.911751 | orchestrator | Sunday 25 May 2025 00:45:38 +0000 (0:00:00.558) 0:00:31.960 ************
2025-05-25 00:45:39.112531 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:39.112676 | orchestrator |
2025-05-25 00:45:39.113635 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:39.114542 | orchestrator | Sunday 25 May 2025 00:45:39 +0000 (0:00:00.203) 0:00:32.164 ************
2025-05-25 00:45:39.317541 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:39.317638 | orchestrator |
2025-05-25 00:45:39.320556 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:39.321038 | orchestrator | Sunday 25 May 2025 00:45:39 +0000 (0:00:00.200) 0:00:32.365 ************
2025-05-25 00:45:39.513826 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:39.514969 | orchestrator |
2025-05-25 00:45:39.516504 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:39.519704 | orchestrator | Sunday 25 May 2025 00:45:39 +0000 (0:00:00.200) 0:00:32.565 ************
2025-05-25 00:45:39.715481 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:39.715576 | orchestrator |
2025-05-25 00:45:39.715590 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:39.715713 | orchestrator | Sunday 25 May 2025 00:45:39 +0000 (0:00:00.202) 0:00:32.768 ************
2025-05-25 00:45:39.914683 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:39.914915 | orchestrator |
2025-05-25 00:45:39.915602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:39.916086 | orchestrator | Sunday 25 May 2025 00:45:39 +0000 (0:00:00.199) 0:00:32.967 ************
2025-05-25 00:45:40.550106 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-25 00:45:40.550313 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-25 00:45:40.550933 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-25 00:45:40.551356 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-25 00:45:40.553349 | orchestrator |
2025-05-25 00:45:40.553383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:40.553397 | orchestrator | Sunday 25 May 2025 00:45:40 +0000 (0:00:00.632) 0:00:33.600 ************
2025-05-25 00:45:40.762073 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:40.762167 | orchestrator |
2025-05-25 00:45:40.762256 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:40.763417 | orchestrator | Sunday 25 May 2025 00:45:40 +0000 (0:00:00.213) 0:00:33.813 ************
2025-05-25 00:45:40.960279 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:40.960486 | orchestrator |
2025-05-25 00:45:40.961147 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:40.961750 | orchestrator | Sunday 25 May 2025 00:45:40 +0000 (0:00:00.198) 0:00:34.012 ************
2025-05-25 00:45:41.163676 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:41.164201 | orchestrator |
2025-05-25 00:45:41.165066 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-25 00:45:41.166544 | orchestrator | Sunday 25 May 2025 00:45:41 +0000 (0:00:00.202) 0:00:34.214 ************
2025-05-25 00:45:41.767701 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:41.768570 | orchestrator |
2025-05-25 00:45:41.769269 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-25 00:45:41.770003 | orchestrator | Sunday 25 May 2025 00:45:41 +0000 (0:00:00.602) 0:00:34.817 ************
2025-05-25 00:45:41.895568 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:41.896942 | orchestrator |
2025-05-25 00:45:41.898533 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-25 00:45:41.899343 | orchestrator | Sunday 25 May 2025 00:45:41 +0000 (0:00:00.131) 0:00:34.948 ************
2025-05-25 00:45:42.113246 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '86509461-9ff7-5f8d-a545-2dedda0a1471'}})
2025-05-25 00:45:42.113543 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1f6e0dcd-8614-5501-94b8-6b816e10f3a3'}})
2025-05-25 00:45:42.114528 | orchestrator |
2025-05-25 00:45:42.115463 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-25 00:45:42.116422 | orchestrator | Sunday 25 May 2025 00:45:42 +0000 (0:00:00.217) 0:00:35.165 ************
2025-05-25 00:45:43.901093 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:43.901541 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:43.902403 | orchestrator |
2025-05-25 00:45:43.902864 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-25 00:45:43.903679 | orchestrator | Sunday 25 May 2025 00:45:43 +0000 (0:00:01.786) 0:00:36.952 ************
2025-05-25 00:45:44.067879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:44.068111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:44.068893 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:44.069207 | orchestrator |
2025-05-25 00:45:44.070898 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-25 00:45:44.071931 | orchestrator | Sunday 25 May 2025 00:45:44 +0000 (0:00:00.168) 0:00:37.120 ************
2025-05-25 00:45:45.354464 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:45.355529 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:45.356283 | orchestrator |
2025-05-25 00:45:45.358144 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-25 00:45:45.358198 | orchestrator | Sunday 25 May 2025 00:45:45 +0000 (0:00:01.285) 0:00:38.406 ************
2025-05-25 00:45:45.517820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:45.518706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:45.519945 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:45.520800 | orchestrator |
2025-05-25 00:45:45.521422 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-25 00:45:45.522778 | orchestrator | Sunday 25 May 2025 00:45:45 +0000 (0:00:00.164) 0:00:38.570 ************
2025-05-25 00:45:45.665291 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:45.665555 | orchestrator |
2025-05-25 00:45:45.666724 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-25 00:45:45.667544 | orchestrator | Sunday 25 May 2025 00:45:45 +0000 (0:00:00.146) 0:00:38.716 ************
2025-05-25 00:45:45.827112 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:45.827261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:45.828251 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:45.829344 | orchestrator |
2025-05-25 00:45:45.830493 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-25 00:45:45.830817 | orchestrator | Sunday 25 May 2025 00:45:45 +0000 (0:00:00.162) 0:00:38.878 ************
2025-05-25 00:45:46.145133 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:46.145259 | orchestrator |
2025-05-25 00:45:46.146218 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-25 00:45:46.147061 | orchestrator | Sunday 25 May 2025 00:45:46 +0000 (0:00:00.318) 0:00:39.197 ************
2025-05-25 00:45:46.320297 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:46.320558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:46.321239 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:46.322107 | orchestrator |
2025-05-25 00:45:46.322943 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-25 00:45:46.323436 | orchestrator | Sunday 25 May 2025 00:45:46 +0000 (0:00:00.175) 0:00:39.373 ************
2025-05-25 00:45:46.463954 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:46.464905 | orchestrator |
2025-05-25 00:45:46.465447 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-25 00:45:46.466439 | orchestrator | Sunday 25 May 2025 00:45:46 +0000 (0:00:00.141) 0:00:39.514 ************
2025-05-25 00:45:46.636594 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:46.637771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:46.638097 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:46.640641 | orchestrator |
2025-05-25 00:45:46.640697 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-25 00:45:46.640767 | orchestrator | Sunday 25 May 2025 00:45:46 +0000 (0:00:00.173) 0:00:39.688 ************
2025-05-25 00:45:46.779244 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:45:46.779885 | orchestrator |
2025-05-25 00:45:46.781014 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-25 00:45:46.783747 | orchestrator | Sunday 25 May 2025 00:45:46 +0000 (0:00:00.141) 0:00:39.829 ************
2025-05-25 00:45:46.941517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:46.942945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:46.944193 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:46.945228 | orchestrator |
2025-05-25 00:45:46.945960 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-25 00:45:46.946639 | orchestrator | Sunday 25 May 2025 00:45:46 +0000 (0:00:00.164) 0:00:39.994 ************
2025-05-25 00:45:47.105809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:47.106848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:47.108220 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:47.109174 | orchestrator |
2025-05-25 00:45:47.110116 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-25 00:45:47.111075 | orchestrator | Sunday 25 May 2025 00:45:47 +0000 (0:00:00.163) 0:00:40.158 ************
2025-05-25 00:45:47.273432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:45:47.273768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:45:47.274385 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:47.275377 | orchestrator |
2025-05-25 00:45:47.276051 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-25 00:45:47.276889 | orchestrator | Sunday 25 May 2025 00:45:47 +0000 (0:00:00.167) 0:00:40.325 ************
2025-05-25 00:45:47.409016 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:47.409678 | orchestrator |
2025-05-25 00:45:47.410677 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-25 00:45:47.411240 | orchestrator | Sunday 25 May 2025 00:45:47 +0000 (0:00:00.136) 0:00:40.462 ************
2025-05-25 00:45:47.553508 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:47.554123 | orchestrator |
2025-05-25 00:45:47.554524 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-25 00:45:47.555404 | orchestrator | Sunday 25 May 2025 00:45:47 +0000 (0:00:00.143) 0:00:40.605 ************
2025-05-25 00:45:47.691426 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:45:47.692564 | orchestrator |
2025-05-25 00:45:47.694623 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-25 00:45:47.695575 | orchestrator | Sunday 25 May 2025 00:45:47 +0000 (0:00:00.136) 0:00:40.742 ************
2025-05-25 00:45:47.826301 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 00:45:47.826530 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-25 00:45:47.827914 | orchestrator | }
2025-05-25 00:45:47.828665 | orchestrator |
2025-05-25 00:45:47.829425 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-25 00:45:47.829930 | orchestrator | Sunday 25 May 2025 00:45:47 +0000 (0:00:00.136) 0:00:40.878 ************
2025-05-25 00:45:48.161979 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 00:45:48.162245 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-25 00:45:48.163370 | orchestrator | }
2025-05-25 00:45:48.164011 | orchestrator |
2025-05-25 00:45:48.164737 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-25 00:45:48.165513 | orchestrator | Sunday 25 May 2025 00:45:48 +0000 (0:00:00.336) 0:00:41.214 ************
2025-05-25 00:45:48.303216 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 00:45:48.303720 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-25 00:45:48.304715 | orchestrator | }
2025-05-25 00:45:48.305789 | orchestrator |
2025-05-25 00:45:48.306478 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-25 00:45:48.307201 | orchestrator | Sunday 25 May 2025 00:45:48 +0000 (0:00:00.141) 0:00:41.355 ************
2025-05-25 00:45:48.800205 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:45:48.800694 | orchestrator |
2025-05-25 00:45:48.801113 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-25 00:45:48.801630 | orchestrator | Sunday 25 May 2025 00:45:48 +0000 (0:00:00.496) 0:00:41.852 ************
2025-05-25 00:45:49.295677 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:45:49.295784 | orchestrator |
2025-05-25 00:45:49.295863 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-25 00:45:49.296590 | orchestrator | Sunday 25 May 2025 00:45:49 +0000 (0:00:00.495) 0:00:42.347 ************
2025-05-25 00:45:49.794269 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:45:49.794596 | orchestrator |
2025-05-25 00:45:49.795134 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output]
************************* 2025-05-25 00:45:49.795722 | orchestrator | Sunday 25 May 2025 00:45:49 +0000 (0:00:00.497) 0:00:42.845 ************ 2025-05-25 00:45:49.933449 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:45:49.933748 | orchestrator | 2025-05-25 00:45:49.934596 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-25 00:45:49.935330 | orchestrator | Sunday 25 May 2025 00:45:49 +0000 (0:00:00.140) 0:00:42.986 ************ 2025-05-25 00:45:50.033275 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:50.033795 | orchestrator | 2025-05-25 00:45:50.036224 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-25 00:45:50.037163 | orchestrator | Sunday 25 May 2025 00:45:50 +0000 (0:00:00.098) 0:00:43.084 ************ 2025-05-25 00:45:50.142585 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:50.143078 | orchestrator | 2025-05-25 00:45:50.143782 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-25 00:45:50.144572 | orchestrator | Sunday 25 May 2025 00:45:50 +0000 (0:00:00.110) 0:00:43.195 ************ 2025-05-25 00:45:50.274112 | orchestrator | ok: [testbed-node-4] => { 2025-05-25 00:45:50.276972 | orchestrator |  "vgs_report": { 2025-05-25 00:45:50.277013 | orchestrator |  "vg": [] 2025-05-25 00:45:50.277034 | orchestrator |  } 2025-05-25 00:45:50.277503 | orchestrator | } 2025-05-25 00:45:50.278175 | orchestrator | 2025-05-25 00:45:50.278864 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-25 00:45:50.279593 | orchestrator | Sunday 25 May 2025 00:45:50 +0000 (0:00:00.129) 0:00:43.325 ************ 2025-05-25 00:45:50.399438 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:50.399641 | orchestrator | 2025-05-25 00:45:50.400505 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-05-25 00:45:50.401081 | orchestrator | Sunday 25 May 2025 00:45:50 +0000 (0:00:00.127) 0:00:43.452 ************ 2025-05-25 00:45:50.525746 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:50.525912 | orchestrator | 2025-05-25 00:45:50.527041 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-25 00:45:50.527594 | orchestrator | Sunday 25 May 2025 00:45:50 +0000 (0:00:00.126) 0:00:43.578 ************ 2025-05-25 00:45:50.864593 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:50.864685 | orchestrator | 2025-05-25 00:45:50.865060 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-25 00:45:50.865979 | orchestrator | Sunday 25 May 2025 00:45:50 +0000 (0:00:00.337) 0:00:43.916 ************ 2025-05-25 00:45:51.008560 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:51.008654 | orchestrator | 2025-05-25 00:45:51.010694 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-25 00:45:51.011448 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.145) 0:00:44.061 ************ 2025-05-25 00:45:51.153200 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:51.153615 | orchestrator | 2025-05-25 00:45:51.154450 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-25 00:45:51.154847 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.143) 0:00:44.205 ************ 2025-05-25 00:45:51.283080 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:51.283178 | orchestrator | 2025-05-25 00:45:51.284453 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-25 00:45:51.285385 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.129) 0:00:44.334 ************ 2025-05-25 00:45:51.421714 | orchestrator | skipping: [testbed-node-4] 
2025-05-25 00:45:51.422179 | orchestrator | 2025-05-25 00:45:51.423096 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-25 00:45:51.424190 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.138) 0:00:44.473 ************ 2025-05-25 00:45:51.566263 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:51.566402 | orchestrator | 2025-05-25 00:45:51.567224 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-25 00:45:51.568189 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.143) 0:00:44.617 ************ 2025-05-25 00:45:51.715381 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:51.716454 | orchestrator | 2025-05-25 00:45:51.716936 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-25 00:45:51.717968 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.150) 0:00:44.768 ************ 2025-05-25 00:45:51.850979 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:51.851967 | orchestrator | 2025-05-25 00:45:51.853668 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-25 00:45:51.854146 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.133) 0:00:44.902 ************ 2025-05-25 00:45:51.979687 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:51.980699 | orchestrator | 2025-05-25 00:45:51.981251 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-25 00:45:51.982564 | orchestrator | Sunday 25 May 2025 00:45:51 +0000 (0:00:00.130) 0:00:45.032 ************ 2025-05-25 00:45:52.137123 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:52.138136 | orchestrator | 2025-05-25 00:45:52.140820 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-25 00:45:52.140855 | orchestrator | 
Sunday 25 May 2025 00:45:52 +0000 (0:00:00.156) 0:00:45.189 ************ 2025-05-25 00:45:52.261456 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:52.261549 | orchestrator | 2025-05-25 00:45:52.262417 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-25 00:45:52.263632 | orchestrator | Sunday 25 May 2025 00:45:52 +0000 (0:00:00.124) 0:00:45.313 ************ 2025-05-25 00:45:52.391083 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:52.392921 | orchestrator | 2025-05-25 00:45:52.393456 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-25 00:45:52.394367 | orchestrator | Sunday 25 May 2025 00:45:52 +0000 (0:00:00.129) 0:00:45.442 ************ 2025-05-25 00:45:52.766282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:52.769129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:52.770119 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:52.770885 | orchestrator | 2025-05-25 00:45:52.771564 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-25 00:45:52.772498 | orchestrator | Sunday 25 May 2025 00:45:52 +0000 (0:00:00.374) 0:00:45.817 ************ 2025-05-25 00:45:52.938768 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:52.939553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:52.942124 | orchestrator | skipping: 
[testbed-node-4] 2025-05-25 00:45:52.942158 | orchestrator | 2025-05-25 00:45:52.942172 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-25 00:45:52.943119 | orchestrator | Sunday 25 May 2025 00:45:52 +0000 (0:00:00.172) 0:00:45.990 ************ 2025-05-25 00:45:53.107711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:53.107878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:53.109249 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:53.109475 | orchestrator | 2025-05-25 00:45:53.110811 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-25 00:45:53.111099 | orchestrator | Sunday 25 May 2025 00:45:53 +0000 (0:00:00.169) 0:00:46.159 ************ 2025-05-25 00:45:53.281672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:53.282089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:53.282576 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:53.283230 | orchestrator | 2025-05-25 00:45:53.283704 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-25 00:45:53.284180 | orchestrator | Sunday 25 May 2025 00:45:53 +0000 (0:00:00.174) 0:00:46.334 ************ 2025-05-25 00:45:53.466992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 
'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:53.467449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:53.467849 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:53.468797 | orchestrator | 2025-05-25 00:45:53.470078 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-25 00:45:53.470278 | orchestrator | Sunday 25 May 2025 00:45:53 +0000 (0:00:00.186) 0:00:46.520 ************ 2025-05-25 00:45:53.634819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:53.636410 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:53.636509 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:53.637591 | orchestrator | 2025-05-25 00:45:53.638340 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-25 00:45:53.638924 | orchestrator | Sunday 25 May 2025 00:45:53 +0000 (0:00:00.166) 0:00:46.687 ************ 2025-05-25 00:45:53.807969 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:53.808148 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:53.808521 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:53.808931 | orchestrator | 2025-05-25 00:45:53.810594 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-25 
00:45:53.810898 | orchestrator | Sunday 25 May 2025 00:45:53 +0000 (0:00:00.173) 0:00:46.860 ************ 2025-05-25 00:45:53.988609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:53.988706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:53.988802 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:53.988913 | orchestrator | 2025-05-25 00:45:53.989900 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-25 00:45:53.990131 | orchestrator | Sunday 25 May 2025 00:45:53 +0000 (0:00:00.180) 0:00:47.041 ************ 2025-05-25 00:45:54.499422 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:45:54.499546 | orchestrator | 2025-05-25 00:45:54.499702 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-25 00:45:54.500386 | orchestrator | Sunday 25 May 2025 00:45:54 +0000 (0:00:00.508) 0:00:47.550 ************ 2025-05-25 00:45:55.000827 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:45:55.000949 | orchestrator | 2025-05-25 00:45:55.001368 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-25 00:45:55.001726 | orchestrator | Sunday 25 May 2025 00:45:54 +0000 (0:00:00.500) 0:00:48.050 ************ 2025-05-25 00:45:55.361058 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:45:55.361678 | orchestrator | 2025-05-25 00:45:55.363181 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-25 00:45:55.363483 | orchestrator | Sunday 25 May 2025 00:45:55 +0000 (0:00:00.363) 0:00:48.414 ************ 2025-05-25 00:45:55.540155 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'vg_name': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'}) 2025-05-25 00:45:55.541077 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'vg_name': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'}) 2025-05-25 00:45:55.542334 | orchestrator | 2025-05-25 00:45:55.546478 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-25 00:45:55.547433 | orchestrator | Sunday 25 May 2025 00:45:55 +0000 (0:00:00.178) 0:00:48.592 ************ 2025-05-25 00:45:55.717016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:55.717569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:55.719651 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:55.719674 | orchestrator | 2025-05-25 00:45:55.719685 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-25 00:45:55.719889 | orchestrator | Sunday 25 May 2025 00:45:55 +0000 (0:00:00.177) 0:00:48.770 ************ 2025-05-25 00:45:55.885548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:55.889821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:55.889868 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:55.889883 | orchestrator | 2025-05-25 00:45:55.890410 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-25 00:45:55.891266 | 
orchestrator | Sunday 25 May 2025 00:45:55 +0000 (0:00:00.167) 0:00:48.937 ************ 2025-05-25 00:45:56.055571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})  2025-05-25 00:45:56.056085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})  2025-05-25 00:45:56.057185 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:45:56.058096 | orchestrator | 2025-05-25 00:45:56.062115 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-25 00:45:56.062835 | orchestrator | Sunday 25 May 2025 00:45:56 +0000 (0:00:00.170) 0:00:49.108 ************ 2025-05-25 00:45:56.895842 | orchestrator | ok: [testbed-node-4] => { 2025-05-25 00:45:56.896202 | orchestrator |  "lvm_report": { 2025-05-25 00:45:56.900664 | orchestrator |  "lv": [ 2025-05-25 00:45:56.901688 | orchestrator |  { 2025-05-25 00:45:56.902914 | orchestrator |  "lv_name": "osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3", 2025-05-25 00:45:56.903867 | orchestrator |  "vg_name": "ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3" 2025-05-25 00:45:56.905040 | orchestrator |  }, 2025-05-25 00:45:56.906053 | orchestrator |  { 2025-05-25 00:45:56.907029 | orchestrator |  "lv_name": "osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471", 2025-05-25 00:45:56.907404 | orchestrator |  "vg_name": "ceph-86509461-9ff7-5f8d-a545-2dedda0a1471" 2025-05-25 00:45:56.908242 | orchestrator |  } 2025-05-25 00:45:56.911002 | orchestrator |  ], 2025-05-25 00:45:56.911515 | orchestrator |  "pv": [ 2025-05-25 00:45:56.912115 | orchestrator |  { 2025-05-25 00:45:56.912941 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-25 00:45:56.913546 | orchestrator |  "vg_name": "ceph-86509461-9ff7-5f8d-a545-2dedda0a1471" 2025-05-25 00:45:56.913966 | orchestrator |  }, 2025-05-25 
00:45:56.914941 | orchestrator |  { 2025-05-25 00:45:56.915251 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-25 00:45:56.915999 | orchestrator |  "vg_name": "ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3" 2025-05-25 00:45:56.916859 | orchestrator |  } 2025-05-25 00:45:56.917643 | orchestrator |  ] 2025-05-25 00:45:56.918294 | orchestrator |  } 2025-05-25 00:45:56.918822 | orchestrator | } 2025-05-25 00:45:56.920800 | orchestrator | 2025-05-25 00:45:56.923110 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-25 00:45:56.923974 | orchestrator | 2025-05-25 00:45:56.924668 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-25 00:45:56.924745 | orchestrator | Sunday 25 May 2025 00:45:56 +0000 (0:00:00.838) 0:00:49.946 ************ 2025-05-25 00:45:57.145446 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-25 00:45:57.148800 | orchestrator | 2025-05-25 00:45:57.148834 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-25 00:45:57.148848 | orchestrator | Sunday 25 May 2025 00:45:57 +0000 (0:00:00.248) 0:00:50.195 ************ 2025-05-25 00:45:57.374254 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:45:57.374455 | orchestrator | 2025-05-25 00:45:57.375622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:57.376368 | orchestrator | Sunday 25 May 2025 00:45:57 +0000 (0:00:00.231) 0:00:50.426 ************ 2025-05-25 00:45:57.852663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-25 00:45:57.852765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-25 00:45:57.853489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-25 00:45:57.854708 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-25 00:45:57.858092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-25 00:45:57.858988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-25 00:45:57.859896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-25 00:45:57.860441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-25 00:45:57.861257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-25 00:45:57.862098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-25 00:45:57.862956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-25 00:45:57.863818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-25 00:45:57.864446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-25 00:45:57.865102 | orchestrator | 2025-05-25 00:45:57.865704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:57.866081 | orchestrator | Sunday 25 May 2025 00:45:57 +0000 (0:00:00.478) 0:00:50.904 ************ 2025-05-25 00:45:58.061527 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:58.061751 | orchestrator | 2025-05-25 00:45:58.063336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:58.067183 | orchestrator | Sunday 25 May 2025 00:45:58 +0000 (0:00:00.209) 0:00:51.114 ************ 2025-05-25 00:45:58.258856 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:58.259644 | orchestrator | 2025-05-25 
00:45:58.260755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:58.262477 | orchestrator | Sunday 25 May 2025 00:45:58 +0000 (0:00:00.197) 0:00:51.311 ************ 2025-05-25 00:45:58.452542 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:58.452746 | orchestrator | 2025-05-25 00:45:58.455220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:58.456661 | orchestrator | Sunday 25 May 2025 00:45:58 +0000 (0:00:00.193) 0:00:51.505 ************ 2025-05-25 00:45:58.665199 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:58.665498 | orchestrator | 2025-05-25 00:45:58.666066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:58.666595 | orchestrator | Sunday 25 May 2025 00:45:58 +0000 (0:00:00.211) 0:00:51.716 ************ 2025-05-25 00:45:58.855193 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:58.855597 | orchestrator | 2025-05-25 00:45:58.856972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:58.860740 | orchestrator | Sunday 25 May 2025 00:45:58 +0000 (0:00:00.191) 0:00:51.907 ************ 2025-05-25 00:45:59.461549 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:59.463823 | orchestrator | 2025-05-25 00:45:59.464004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:59.464868 | orchestrator | Sunday 25 May 2025 00:45:59 +0000 (0:00:00.605) 0:00:52.512 ************ 2025-05-25 00:45:59.662926 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:59.663955 | orchestrator | 2025-05-25 00:45:59.667938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:59.667987 | orchestrator | Sunday 25 May 2025 00:45:59 +0000 (0:00:00.202) 
0:00:52.714 ************ 2025-05-25 00:45:59.854476 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:45:59.854575 | orchestrator | 2025-05-25 00:45:59.854657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:45:59.855133 | orchestrator | Sunday 25 May 2025 00:45:59 +0000 (0:00:00.192) 0:00:52.907 ************ 2025-05-25 00:46:00.260686 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1) 2025-05-25 00:46:00.261900 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1) 2025-05-25 00:46:00.264622 | orchestrator | 2025-05-25 00:46:00.264656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:46:00.264669 | orchestrator | Sunday 25 May 2025 00:46:00 +0000 (0:00:00.404) 0:00:53.311 ************ 2025-05-25 00:46:00.678910 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9) 2025-05-25 00:46:00.679133 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9) 2025-05-25 00:46:00.679603 | orchestrator | 2025-05-25 00:46:00.680119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:46:00.683587 | orchestrator | Sunday 25 May 2025 00:46:00 +0000 (0:00:00.418) 0:00:53.730 ************ 2025-05-25 00:46:01.114076 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5) 2025-05-25 00:46:01.115087 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5) 2025-05-25 00:46:01.115228 | orchestrator | 2025-05-25 00:46:01.118711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:46:01.118769 | orchestrator | Sunday 25 
May 2025 00:46:01 +0000 (0:00:00.435) 0:00:54.165 ************ 2025-05-25 00:46:01.546983 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825) 2025-05-25 00:46:01.547085 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825) 2025-05-25 00:46:01.552675 | orchestrator | 2025-05-25 00:46:01.552709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-25 00:46:01.552722 | orchestrator | Sunday 25 May 2025 00:46:01 +0000 (0:00:00.433) 0:00:54.598 ************ 2025-05-25 00:46:01.877688 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-25 00:46:01.879446 | orchestrator | 2025-05-25 00:46:01.881590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:01.881805 | orchestrator | Sunday 25 May 2025 00:46:01 +0000 (0:00:00.330) 0:00:54.929 ************ 2025-05-25 00:46:02.325163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-25 00:46:02.325354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-25 00:46:02.328532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-25 00:46:02.330265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-25 00:46:02.330345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-25 00:46:02.331234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-25 00:46:02.333017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-25 00:46:02.334459 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-25 00:46:02.335207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-25 00:46:02.336067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-25 00:46:02.336805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-25 00:46:02.337943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-25 00:46:02.339727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-25 00:46:02.340231 | orchestrator | 2025-05-25 00:46:02.341359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:02.342229 | orchestrator | Sunday 25 May 2025 00:46:02 +0000 (0:00:00.446) 0:00:55.375 ************ 2025-05-25 00:46:02.905967 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:02.906180 | orchestrator | 2025-05-25 00:46:02.910614 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:02.910653 | orchestrator | Sunday 25 May 2025 00:46:02 +0000 (0:00:00.581) 0:00:55.957 ************ 2025-05-25 00:46:03.119958 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:03.124083 | orchestrator | 2025-05-25 00:46:03.124135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:03.124151 | orchestrator | Sunday 25 May 2025 00:46:03 +0000 (0:00:00.213) 0:00:56.171 ************ 2025-05-25 00:46:03.342592 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:03.342833 | orchestrator | 2025-05-25 00:46:03.346864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:03.346893 | 
orchestrator | Sunday 25 May 2025 00:46:03 +0000 (0:00:00.222) 0:00:56.394 ************ 2025-05-25 00:46:03.550747 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:03.551097 | orchestrator | 2025-05-25 00:46:03.557626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:03.557666 | orchestrator | Sunday 25 May 2025 00:46:03 +0000 (0:00:00.207) 0:00:56.602 ************ 2025-05-25 00:46:03.762096 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:03.763210 | orchestrator | 2025-05-25 00:46:03.764413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:03.765351 | orchestrator | Sunday 25 May 2025 00:46:03 +0000 (0:00:00.211) 0:00:56.813 ************ 2025-05-25 00:46:03.959107 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:03.960464 | orchestrator | 2025-05-25 00:46:03.962431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:03.962463 | orchestrator | Sunday 25 May 2025 00:46:03 +0000 (0:00:00.197) 0:00:57.011 ************ 2025-05-25 00:46:04.176803 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:04.178834 | orchestrator | 2025-05-25 00:46:04.179379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:04.180384 | orchestrator | Sunday 25 May 2025 00:46:04 +0000 (0:00:00.216) 0:00:57.227 ************ 2025-05-25 00:46:04.372778 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:04.374705 | orchestrator | 2025-05-25 00:46:04.377607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:04.377640 | orchestrator | Sunday 25 May 2025 00:46:04 +0000 (0:00:00.193) 0:00:57.421 ************ 2025-05-25 00:46:05.241476 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-25 00:46:05.241588 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-05-25 00:46:05.241714 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-25 00:46:05.244393 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-25 00:46:05.244418 | orchestrator | 2025-05-25 00:46:05.244431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:05.244445 | orchestrator | Sunday 25 May 2025 00:46:05 +0000 (0:00:00.870) 0:00:58.292 ************ 2025-05-25 00:46:05.437154 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:05.437240 | orchestrator | 2025-05-25 00:46:05.440131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:05.440157 | orchestrator | Sunday 25 May 2025 00:46:05 +0000 (0:00:00.196) 0:00:58.488 ************ 2025-05-25 00:46:06.054355 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:06.054901 | orchestrator | 2025-05-25 00:46:06.056021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:06.056875 | orchestrator | Sunday 25 May 2025 00:46:06 +0000 (0:00:00.616) 0:00:59.105 ************ 2025-05-25 00:46:06.259873 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:06.260616 | orchestrator | 2025-05-25 00:46:06.261594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-25 00:46:06.262517 | orchestrator | Sunday 25 May 2025 00:46:06 +0000 (0:00:00.205) 0:00:59.311 ************ 2025-05-25 00:46:06.454869 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:06.455179 | orchestrator | 2025-05-25 00:46:06.456515 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-25 00:46:06.457457 | orchestrator | Sunday 25 May 2025 00:46:06 +0000 (0:00:00.196) 0:00:59.507 ************ 2025-05-25 00:46:06.611966 | orchestrator | skipping: [testbed-node-5] 2025-05-25 
00:46:06.612121 | orchestrator |
2025-05-25 00:46:06.613720 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-25 00:46:06.614763 | orchestrator | Sunday 25 May 2025 00:46:06 +0000 (0:00:00.156) 0:00:59.664 ************
2025-05-25 00:46:06.817269 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f34e313d-bca1-5ff8-8346-de91d98588f2'}})
2025-05-25 00:46:06.817393 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a31c7786-f287-566f-81cf-65786b8dbda6'}})
2025-05-25 00:46:06.818322 | orchestrator |
2025-05-25 00:46:06.818959 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-25 00:46:06.820031 | orchestrator | Sunday 25 May 2025 00:46:06 +0000 (0:00:00.205) 0:00:59.869 ************
2025-05-25 00:46:08.683091 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})
2025-05-25 00:46:08.686898 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})
2025-05-25 00:46:08.687545 | orchestrator |
2025-05-25 00:46:08.688413 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-25 00:46:08.688660 | orchestrator | Sunday 25 May 2025 00:46:08 +0000 (0:00:01.861) 0:01:01.731 ************
2025-05-25 00:46:08.840922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})
2025-05-25 00:46:08.841333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})
2025-05-25 00:46:08.842537 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:08.843054 | orchestrator |
2025-05-25 00:46:08.843575 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-25 00:46:08.844171 | orchestrator | Sunday 25 May 2025 00:46:08 +0000 (0:00:00.162) 0:01:01.894 ************
2025-05-25 00:46:10.100918 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})
2025-05-25 00:46:10.101022 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})
2025-05-25 00:46:10.102123 | orchestrator |
2025-05-25 00:46:10.104907 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-25 00:46:10.104935 | orchestrator | Sunday 25 May 2025 00:46:10 +0000 (0:00:01.256) 0:01:03.151 ************
2025-05-25 00:46:10.281967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})
2025-05-25 00:46:10.282144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})
2025-05-25 00:46:10.282226 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:10.282773 | orchestrator |
2025-05-25 00:46:10.284048 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-25 00:46:10.286141 | orchestrator | Sunday 25 May 2025 00:46:10 +0000 (0:00:00.183) 0:01:03.334 ************
2025-05-25 00:46:10.580766 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:10.581247 | orchestrator |
2025-05-25 00:46:10.582342 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-25 00:46:10.583569 |
orchestrator | Sunday 25 May 2025 00:46:10 +0000 (0:00:00.299) 0:01:03.633 ************ 2025-05-25 00:46:10.767970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:10.768750 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:10.771619 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:10.771650 | orchestrator | 2025-05-25 00:46:10.771664 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-25 00:46:10.771677 | orchestrator | Sunday 25 May 2025 00:46:10 +0000 (0:00:00.186) 0:01:03.819 ************ 2025-05-25 00:46:10.911126 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:10.911271 | orchestrator | 2025-05-25 00:46:10.912099 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-25 00:46:10.912944 | orchestrator | Sunday 25 May 2025 00:46:10 +0000 (0:00:00.144) 0:01:03.963 ************ 2025-05-25 00:46:11.071428 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:11.071592 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:11.072016 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:11.072520 | orchestrator | 2025-05-25 00:46:11.073097 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-25 00:46:11.075456 | orchestrator | Sunday 25 May 2025 00:46:11 +0000 (0:00:00.159) 0:01:04.123 ************ 2025-05-25 00:46:11.212755 | orchestrator | 
skipping: [testbed-node-5] 2025-05-25 00:46:11.212937 | orchestrator | 2025-05-25 00:46:11.212957 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-25 00:46:11.213732 | orchestrator | Sunday 25 May 2025 00:46:11 +0000 (0:00:00.141) 0:01:04.265 ************ 2025-05-25 00:46:11.388949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:11.389126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:11.389817 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:11.390005 | orchestrator | 2025-05-25 00:46:11.390451 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-25 00:46:11.390835 | orchestrator | Sunday 25 May 2025 00:46:11 +0000 (0:00:00.176) 0:01:04.441 ************ 2025-05-25 00:46:11.526362 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:46:11.526651 | orchestrator | 2025-05-25 00:46:11.527158 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-25 00:46:11.527587 | orchestrator | Sunday 25 May 2025 00:46:11 +0000 (0:00:00.137) 0:01:04.579 ************ 2025-05-25 00:46:11.679649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:11.679873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:11.679897 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:11.680503 | orchestrator | 2025-05-25 00:46:11.681039 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-05-25 00:46:11.681669 | orchestrator | Sunday 25 May 2025 00:46:11 +0000 (0:00:00.152) 0:01:04.732 ************ 2025-05-25 00:46:11.855693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:11.857830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:11.857872 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:11.857885 | orchestrator | 2025-05-25 00:46:11.858600 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-25 00:46:11.859063 | orchestrator | Sunday 25 May 2025 00:46:11 +0000 (0:00:00.173) 0:01:04.905 ************ 2025-05-25 00:46:12.011352 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:12.011530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:12.012714 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:12.014869 | orchestrator | 2025-05-25 00:46:12.014901 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-25 00:46:12.014916 | orchestrator | Sunday 25 May 2025 00:46:12 +0000 (0:00:00.156) 0:01:05.062 ************ 2025-05-25 00:46:12.152486 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:12.152580 | orchestrator | 2025-05-25 00:46:12.155189 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-25 00:46:12.155687 | orchestrator | Sunday 25 May 2025 00:46:12 +0000 
(0:00:00.141) 0:01:05.204 ************
2025-05-25 00:46:12.270642 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:12.272029 | orchestrator |
2025-05-25 00:46:12.273063 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-25 00:46:12.274139 | orchestrator | Sunday 25 May 2025 00:46:12 +0000 (0:00:00.119) 0:01:05.323 ************
2025-05-25 00:46:12.572279 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:12.572956 | orchestrator |
2025-05-25 00:46:12.573446 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-25 00:46:12.574507 | orchestrator | Sunday 25 May 2025 00:46:12 +0000 (0:00:00.300) 0:01:05.624 ************
2025-05-25 00:46:12.729019 | orchestrator | ok: [testbed-node-5] => {
2025-05-25 00:46:12.729265 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-25 00:46:12.730439 | orchestrator | }
2025-05-25 00:46:12.731605 | orchestrator |
2025-05-25 00:46:12.732087 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-25 00:46:12.732322 | orchestrator | Sunday 25 May 2025 00:46:12 +0000 (0:00:00.157) 0:01:05.781 ************
2025-05-25 00:46:12.877966 | orchestrator | ok: [testbed-node-5] => {
2025-05-25 00:46:12.878512 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-25 00:46:12.881493 | orchestrator | }
2025-05-25 00:46:12.881536 | orchestrator |
2025-05-25 00:46:12.881549 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-25 00:46:12.881646 | orchestrator | Sunday 25 May 2025 00:46:12 +0000 (0:00:00.147) 0:01:05.929 ************
2025-05-25 00:46:13.026935 | orchestrator | ok: [testbed-node-5] => {
2025-05-25 00:46:13.027143 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-25 00:46:13.029021 | orchestrator | }
2025-05-25 00:46:13.029826 | orchestrator |
2025-05-25 00:46:13.030970 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-25 00:46:13.031002 | orchestrator | Sunday 25 May 2025 00:46:13 +0000 (0:00:00.149) 0:01:06.079 ************
2025-05-25 00:46:13.538564 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:46:13.539858 | orchestrator |
2025-05-25 00:46:13.540387 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-25 00:46:13.543058 | orchestrator | Sunday 25 May 2025 00:46:13 +0000 (0:00:00.511) 0:01:06.590 ************
2025-05-25 00:46:14.029765 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:46:14.029924 | orchestrator |
2025-05-25 00:46:14.030456 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-25 00:46:14.030947 | orchestrator | Sunday 25 May 2025 00:46:14 +0000 (0:00:00.490) 0:01:07.081 ************
2025-05-25 00:46:14.528629 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:46:14.528959 | orchestrator |
2025-05-25 00:46:14.530222 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-25 00:46:14.530640 | orchestrator | Sunday 25 May 2025 00:46:14 +0000 (0:00:00.499) 0:01:07.581 ************
2025-05-25 00:46:14.690589 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:46:14.691394 | orchestrator |
2025-05-25 00:46:14.691634 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-25 00:46:14.692819 | orchestrator | Sunday 25 May 2025 00:46:14 +0000 (0:00:00.162) 0:01:07.743 ************
2025-05-25 00:46:14.810100 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:14.810203 | orchestrator |
2025-05-25 00:46:14.810585 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-25 00:46:14.811632 | orchestrator | Sunday 25 May 2025 00:46:14 +0000 (0:00:00.118) 0:01:07.861 ************
2025-05-25 00:46:14.917063 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:14.917758 | orchestrator |
2025-05-25 00:46:14.919252 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-25 00:46:14.921613 | orchestrator | Sunday 25 May 2025 00:46:14 +0000 (0:00:00.108) 0:01:07.970 ************
2025-05-25 00:46:15.229985 | orchestrator | ok: [testbed-node-5] => {
2025-05-25 00:46:15.231132 | orchestrator |  "vgs_report": {
2025-05-25 00:46:15.231930 | orchestrator |  "vg": []
2025-05-25 00:46:15.232863 | orchestrator |  }
2025-05-25 00:46:15.233601 | orchestrator | }
2025-05-25 00:46:15.234642 | orchestrator |
2025-05-25 00:46:15.235544 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-25 00:46:15.236095 | orchestrator | Sunday 25 May 2025 00:46:15 +0000 (0:00:00.312) 0:01:08.282 ************
2025-05-25 00:46:15.367742 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:15.368450 | orchestrator |
2025-05-25 00:46:15.368818 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-25 00:46:15.369468 | orchestrator | Sunday 25 May 2025 00:46:15 +0000 (0:00:00.137) 0:01:08.420 ************
2025-05-25 00:46:15.502005 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:15.502636 | orchestrator |
2025-05-25 00:46:15.503074 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-25 00:46:15.503656 | orchestrator | Sunday 25 May 2025 00:46:15 +0000 (0:00:00.133) 0:01:08.554 ************
2025-05-25 00:46:15.648529 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:15.648709 | orchestrator |
2025-05-25 00:46:15.649225 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-25 00:46:15.649998 | orchestrator | Sunday 25 May 2025 00:46:15 +0000 (0:00:00.146) 0:01:08.701 ************
2025-05-25 00:46:15.785805 |
orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:15.786133 | orchestrator | 2025-05-25 00:46:15.786478 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-25 00:46:15.787580 | orchestrator | Sunday 25 May 2025 00:46:15 +0000 (0:00:00.137) 0:01:08.839 ************ 2025-05-25 00:46:15.927248 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:15.927350 | orchestrator | 2025-05-25 00:46:15.927700 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-25 00:46:15.928161 | orchestrator | Sunday 25 May 2025 00:46:15 +0000 (0:00:00.138) 0:01:08.978 ************ 2025-05-25 00:46:16.070832 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:16.071586 | orchestrator | 2025-05-25 00:46:16.073120 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-25 00:46:16.074478 | orchestrator | Sunday 25 May 2025 00:46:16 +0000 (0:00:00.145) 0:01:09.124 ************ 2025-05-25 00:46:16.216650 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:16.218126 | orchestrator | 2025-05-25 00:46:16.219178 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-25 00:46:16.220077 | orchestrator | Sunday 25 May 2025 00:46:16 +0000 (0:00:00.144) 0:01:09.268 ************ 2025-05-25 00:46:16.361644 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:16.362127 | orchestrator | 2025-05-25 00:46:16.363253 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-25 00:46:16.364519 | orchestrator | Sunday 25 May 2025 00:46:16 +0000 (0:00:00.144) 0:01:09.412 ************ 2025-05-25 00:46:16.492076 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:16.492937 | orchestrator | 2025-05-25 00:46:16.494563 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2025-05-25 00:46:16.494793 | orchestrator | Sunday 25 May 2025 00:46:16 +0000 (0:00:00.131) 0:01:09.544 ************ 2025-05-25 00:46:16.630374 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:16.632236 | orchestrator | 2025-05-25 00:46:16.632620 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-25 00:46:16.633681 | orchestrator | Sunday 25 May 2025 00:46:16 +0000 (0:00:00.138) 0:01:09.683 ************ 2025-05-25 00:46:16.765695 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:16.767083 | orchestrator | 2025-05-25 00:46:16.768181 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-25 00:46:16.769073 | orchestrator | Sunday 25 May 2025 00:46:16 +0000 (0:00:00.135) 0:01:09.818 ************ 2025-05-25 00:46:17.087444 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:17.088142 | orchestrator | 2025-05-25 00:46:17.090470 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-25 00:46:17.090495 | orchestrator | Sunday 25 May 2025 00:46:17 +0000 (0:00:00.319) 0:01:10.138 ************ 2025-05-25 00:46:17.238401 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:17.239769 | orchestrator | 2025-05-25 00:46:17.240008 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-25 00:46:17.241142 | orchestrator | Sunday 25 May 2025 00:46:17 +0000 (0:00:00.151) 0:01:10.290 ************ 2025-05-25 00:46:17.370763 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:17.372370 | orchestrator | 2025-05-25 00:46:17.373106 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-25 00:46:17.374746 | orchestrator | Sunday 25 May 2025 00:46:17 +0000 (0:00:00.133) 0:01:10.423 ************ 2025-05-25 00:46:17.527100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:17.528041 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:17.529205 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:17.530214 | orchestrator | 2025-05-25 00:46:17.531090 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-25 00:46:17.531475 | orchestrator | Sunday 25 May 2025 00:46:17 +0000 (0:00:00.155) 0:01:10.579 ************ 2025-05-25 00:46:17.686583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:17.687262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:17.688432 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:17.689581 | orchestrator | 2025-05-25 00:46:17.690985 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-25 00:46:17.692664 | orchestrator | Sunday 25 May 2025 00:46:17 +0000 (0:00:00.159) 0:01:10.739 ************ 2025-05-25 00:46:17.858682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:17.859640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:17.860956 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:17.862438 | orchestrator | 2025-05-25 00:46:17.865633 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-05-25 00:46:17.866226 | orchestrator | Sunday 25 May 2025 00:46:17 +0000 (0:00:00.170) 0:01:10.910 ************ 2025-05-25 00:46:18.018510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:18.019197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:18.021252 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:18.022242 | orchestrator | 2025-05-25 00:46:18.023638 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-25 00:46:18.024609 | orchestrator | Sunday 25 May 2025 00:46:18 +0000 (0:00:00.161) 0:01:11.071 ************ 2025-05-25 00:46:18.180423 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:18.181117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:18.181835 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:18.182926 | orchestrator | 2025-05-25 00:46:18.183398 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-25 00:46:18.183946 | orchestrator | Sunday 25 May 2025 00:46:18 +0000 (0:00:00.161) 0:01:11.232 ************ 2025-05-25 00:46:18.342408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:18.343272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:18.345212 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:18.346727 | orchestrator | 2025-05-25 00:46:18.347712 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-25 00:46:18.347996 | orchestrator | Sunday 25 May 2025 00:46:18 +0000 (0:00:00.161) 0:01:11.394 ************ 2025-05-25 00:46:18.516143 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:18.516998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:18.517717 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:18.519015 | orchestrator | 2025-05-25 00:46:18.521509 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-25 00:46:18.521975 | orchestrator | Sunday 25 May 2025 00:46:18 +0000 (0:00:00.174) 0:01:11.568 ************ 2025-05-25 00:46:18.705704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:18.705885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:18.706160 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:18.706628 | orchestrator | 2025-05-25 00:46:18.707084 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-25 00:46:18.707417 | orchestrator | Sunday 25 May 2025 00:46:18 +0000 (0:00:00.189) 0:01:11.758 ************ 2025-05-25 00:46:19.382347 | 
orchestrator | ok: [testbed-node-5] 2025-05-25 00:46:19.382740 | orchestrator | 2025-05-25 00:46:19.388464 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-25 00:46:19.388511 | orchestrator | Sunday 25 May 2025 00:46:19 +0000 (0:00:00.674) 0:01:12.432 ************ 2025-05-25 00:46:19.881566 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:46:19.882268 | orchestrator | 2025-05-25 00:46:19.882599 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-25 00:46:19.883802 | orchestrator | Sunday 25 May 2025 00:46:19 +0000 (0:00:00.501) 0:01:12.934 ************ 2025-05-25 00:46:20.032095 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:46:20.032279 | orchestrator | 2025-05-25 00:46:20.033136 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-25 00:46:20.034005 | orchestrator | Sunday 25 May 2025 00:46:20 +0000 (0:00:00.149) 0:01:13.083 ************ 2025-05-25 00:46:20.214523 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'vg_name': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'}) 2025-05-25 00:46:20.215020 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'vg_name': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'}) 2025-05-25 00:46:20.215522 | orchestrator | 2025-05-25 00:46:20.216073 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-25 00:46:20.216812 | orchestrator | Sunday 25 May 2025 00:46:20 +0000 (0:00:00.183) 0:01:13.267 ************ 2025-05-25 00:46:20.390655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:20.390776 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:20.390895 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:20.391788 | orchestrator | 2025-05-25 00:46:20.392102 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-25 00:46:20.392416 | orchestrator | Sunday 25 May 2025 00:46:20 +0000 (0:00:00.176) 0:01:13.443 ************ 2025-05-25 00:46:20.568482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:20.568580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:20.568982 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:20.569315 | orchestrator | 2025-05-25 00:46:20.569982 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-25 00:46:20.570517 | orchestrator | Sunday 25 May 2025 00:46:20 +0000 (0:00:00.177) 0:01:13.620 ************ 2025-05-25 00:46:20.746478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})  2025-05-25 00:46:20.747666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})  2025-05-25 00:46:20.748854 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:46:20.749574 | orchestrator | 2025-05-25 00:46:20.750357 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-25 00:46:20.751078 | orchestrator | Sunday 25 May 2025 00:46:20 +0000 (0:00:00.178) 0:01:13.799 ************ 2025-05-25 00:46:21.148953 | 
orchestrator | ok: [testbed-node-5] => {
2025-05-25 00:46:21.149591 | orchestrator |     "lvm_report": {
2025-05-25 00:46:21.150842 | orchestrator |         "lv": [
2025-05-25 00:46:21.151849 | orchestrator |             {
2025-05-25 00:46:21.152999 | orchestrator |                 "lv_name": "osd-block-a31c7786-f287-566f-81cf-65786b8dbda6",
2025-05-25 00:46:21.153749 | orchestrator |                 "vg_name": "ceph-a31c7786-f287-566f-81cf-65786b8dbda6"
2025-05-25 00:46:21.155364 | orchestrator |             },
2025-05-25 00:46:21.155846 | orchestrator |             {
2025-05-25 00:46:21.156096 | orchestrator |                 "lv_name": "osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2",
2025-05-25 00:46:21.156873 | orchestrator |                 "vg_name": "ceph-f34e313d-bca1-5ff8-8346-de91d98588f2"
2025-05-25 00:46:21.157756 | orchestrator |             }
2025-05-25 00:46:21.158376 | orchestrator |         ],
2025-05-25 00:46:21.159264 | orchestrator |         "pv": [
2025-05-25 00:46:21.159814 | orchestrator |             {
2025-05-25 00:46:21.160494 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-25 00:46:21.161431 | orchestrator |                 "vg_name": "ceph-f34e313d-bca1-5ff8-8346-de91d98588f2"
2025-05-25 00:46:21.161962 | orchestrator |             },
2025-05-25 00:46:21.162518 | orchestrator |             {
2025-05-25 00:46:21.163465 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-25 00:46:21.163737 | orchestrator |                 "vg_name": "ceph-a31c7786-f287-566f-81cf-65786b8dbda6"
2025-05-25 00:46:21.164559 | orchestrator |             }
2025-05-25 00:46:21.165154 | orchestrator |         ]
2025-05-25 00:46:21.165716 | orchestrator |     }
2025-05-25 00:46:21.165737 | orchestrator | }
2025-05-25 00:46:21.166087 | orchestrator |
2025-05-25 00:46:21.166687 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:46:21.167657 | orchestrator | 2025-05-25 00:46:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:46:21.167680 | orchestrator | 2025-05-25 00:46:21 | INFO  | Please wait and do not abort execution.
2025-05-25 00:46:21.168652 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-25 00:46:21.168823 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-25 00:46:21.169096 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-25 00:46:21.169688 | orchestrator |
2025-05-25 00:46:21.170523 | orchestrator |
2025-05-25 00:46:21.170960 | orchestrator |
2025-05-25 00:46:21.171167 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:46:21.171733 | orchestrator | Sunday 25 May 2025 00:46:21 +0000 (0:00:00.400) 0:01:14.199 ************
2025-05-25 00:46:21.172067 | orchestrator | ===============================================================================
2025-05-25 00:46:21.172700 | orchestrator | Create block VGs -------------------------------------------------------- 5.86s
2025-05-25 00:46:21.173027 | orchestrator | Create block LVs -------------------------------------------------------- 3.95s
2025-05-25 00:46:21.173851 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.99s
2025-05-25 00:46:21.174120 | orchestrator | Print LVM report data --------------------------------------------------- 1.90s
2025-05-25 00:46:21.174588 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.69s
2025-05-25 00:46:21.175191 | orchestrator | Add known links to the list of available block devices ------------------ 1.64s
2025-05-25 00:46:21.175755 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s
2025-05-25 00:46:21.175996 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.50s
2025-05-25 00:46:21.176538 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.49s
2025-05-25 00:46:21.176850 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s
2025-05-25 00:46:21.177280 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.07s
2025-05-25 00:46:21.177816 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2025-05-25 00:46:21.177950 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s
2025-05-25 00:46:21.178299 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2025-05-25 00:46:21.178801 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.69s
2025-05-25 00:46:21.179389 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.69s
2025-05-25 00:46:21.179573 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.68s
2025-05-25 00:46:21.179873 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-05-25 00:46:21.180204 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.66s
2025-05-25 00:46:21.180690 | orchestrator | Print number of OSDs wanted per WAL VG ---------------------------------- 0.64s
2025-05-25 00:46:23.120911 | orchestrator | 2025-05-25 00:46:23 | INFO  | Task 8c8dbbb1-c2a7-48c5-8b24-e9608c8d69ad (facts) was prepared for execution.
2025-05-25 00:46:23.121029 | orchestrator | 2025-05-25 00:46:23 | INFO  | It takes a moment until task 8c8dbbb1-c2a7-48c5-8b24-e9608c8d69ad (facts) has been started and output is visible here.
2025-05-25 00:46:26.215709 | orchestrator |
2025-05-25 00:46:26.218185 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-25 00:46:26.219723 | orchestrator |
2025-05-25 00:46:26.221604 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-25 00:46:26.222904 | orchestrator | Sunday 25 May 2025 00:46:26 +0000 (0:00:00.214) 0:00:00.214 ************
2025-05-25 00:46:27.182424 | orchestrator | ok: [testbed-manager]
2025-05-25 00:46:27.183445 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:46:27.184661 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:46:27.187581 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:46:27.187604 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:46:27.188767 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:46:27.190088 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:46:27.191629 | orchestrator |
2025-05-25 00:46:27.192597 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-25 00:46:27.193658 | orchestrator | Sunday 25 May 2025 00:46:27 +0000 (0:00:00.967) 0:00:01.182 ************
2025-05-25 00:46:27.349874 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:46:27.431399 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:46:27.510710 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:46:27.590763 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:46:27.667138 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:46:28.377728 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:46:28.378165 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:28.379449 | orchestrator |
2025-05-25 00:46:28.380115 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-25 00:46:28.381352 | orchestrator |
2025-05-25 00:46:28.382494 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-25 00:46:28.382973 | orchestrator | Sunday 25 May 2025 00:46:28 +0000 (0:00:01.197) 0:00:02.379 ************
2025-05-25 00:46:32.764021 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:46:32.764848 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:46:32.766218 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:46:32.767109 | orchestrator | ok: [testbed-manager]
2025-05-25 00:46:32.768138 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:46:32.769936 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:46:32.770178 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:46:32.771174 | orchestrator |
2025-05-25 00:46:32.771746 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-25 00:46:32.772729 | orchestrator |
2025-05-25 00:46:32.773547 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-25 00:46:32.774164 | orchestrator | Sunday 25 May 2025 00:46:32 +0000 (0:00:04.387) 0:00:06.767 ************
2025-05-25 00:46:33.072927 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:46:33.152779 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:46:33.225058 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:46:33.299635 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:46:33.378253 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:46:33.420474 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:46:33.421039 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:46:33.421066 | orchestrator |
2025-05-25 00:46:33.421259 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:46:33.422111 | orchestrator | 2025-05-25 00:46:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-25 00:46:33.422139 | orchestrator | 2025-05-25 00:46:33 | INFO  | Please wait and do not abort execution.
2025-05-25 00:46:33.422538 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:46:33.422619 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:46:33.422718 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:46:33.423712 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:46:33.423808 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:46:33.424090 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:46:33.424373 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 00:46:33.424990 | orchestrator |
2025-05-25 00:46:33.427692 | orchestrator | Sunday 25 May 2025 00:46:33 +0000 (0:00:00.652) 0:00:07.419 ************
2025-05-25 00:46:33.427723 | orchestrator | ===============================================================================
2025-05-25 00:46:33.428108 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.39s
2025-05-25 00:46:33.428998 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2025-05-25 00:46:33.429250 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s
2025-05-25 00:46:33.429719 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s
2025-05-25 00:46:33.952839 | orchestrator |
2025-05-25 00:46:33.956236 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun May 25 00:46:33 UTC 2025
2025-05-25 00:46:33.956272 |
orchestrator |
2025-05-25 00:46:35.333020 | orchestrator | 2025-05-25 00:46:35 | INFO  | Collection nutshell is prepared for execution
2025-05-25 00:46:35.334747 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [0] - dotfiles
2025-05-25 00:46:35.337895 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [0] - homer
2025-05-25 00:46:35.337924 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [0] - netdata
2025-05-25 00:46:35.337937 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [0] - openstackclient
2025-05-25 00:46:35.338445 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [0] - phpmyadmin
2025-05-25 00:46:35.338554 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [0] - common
2025-05-25 00:46:35.339581 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [1] -- loadbalancer
2025-05-25 00:46:35.339658 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [2] --- opensearch
2025-05-25 00:46:35.339948 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [2] --- mariadb-ng
2025-05-25 00:46:35.339971 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [3] ---- horizon
2025-05-25 00:46:35.339983 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [3] ---- keystone
2025-05-25 00:46:35.340100 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [4] ----- neutron
2025-05-25 00:46:35.340403 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [5] ------ wait-for-nova
2025-05-25 00:46:35.340428 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [5] ------ octavia
2025-05-25 00:46:35.340916 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [4] ----- barbican
2025-05-25 00:46:35.342707 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [4] ----- designate
2025-05-25 00:46:35.342732 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [4] ----- ironic
2025-05-25 00:46:35.342743 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [4] ----- placement
2025-05-25 00:46:35.342754 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [4] ----- magnum
2025-05-25 00:46:35.342765 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [1] -- openvswitch
2025-05-25 00:46:35.342776 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [2] --- ovn
2025-05-25 00:46:35.342787 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [1] -- memcached
2025-05-25 00:46:35.342798 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [1] -- redis
2025-05-25 00:46:35.342809 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [1] -- rabbitmq-ng
2025-05-25 00:46:35.342820 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [0] - kubernetes
2025-05-25 00:46:35.342830 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [1] -- kubeconfig
2025-05-25 00:46:35.342841 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [1] -- copy-kubeconfig
2025-05-25 00:46:35.342853 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [0] - ceph
2025-05-25 00:46:35.343464 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [1] -- ceph-pools
2025-05-25 00:46:35.343763 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [2] --- copy-ceph-keys
2025-05-25 00:46:35.343785 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [3] ---- cephclient
2025-05-25 00:46:35.343796 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-25 00:46:35.343995 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [4] ----- wait-for-keystone
2025-05-25 00:46:35.344016 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-25 00:46:35.344353 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [5] ------ glance
2025-05-25 00:46:35.344382 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [5] ------ cinder
2025-05-25 00:46:35.344442 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [5] ------ nova
2025-05-25 00:46:35.344667 | orchestrator | 2025-05-25 00:46:35 | INFO  | A [4] ----- prometheus
2025-05-25 00:46:35.344773 | orchestrator | 2025-05-25 00:46:35 | INFO  | D [5] ------ grafana
2025-05-25 00:46:35.472696 | orchestrator | 2025-05-25 00:46:35 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-05-25 00:46:35.472785 |
orchestrator | 2025-05-25 00:46:35 | INFO  | Tasks are running in the background
2025-05-25 00:46:37.240439 | orchestrator | 2025-05-25 00:46:37 | INFO  | No task IDs specified, wait for all currently running tasks
2025-05-25 00:46:39.326469 | orchestrator | 2025-05-25 00:46:39 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:46:39.326640 | orchestrator | 2025-05-25 00:46:39 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED
2025-05-25 00:46:39.327309 | orchestrator | 2025-05-25 00:46:39 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED
2025-05-25 00:46:39.327757 | orchestrator | 2025-05-25 00:46:39 | INFO  | Task 68fc4015-9a6b-4d57-a2c4-184618b8b7f1 is in state STARTED
2025-05-25 00:46:39.328248 | orchestrator | 2025-05-25 00:46:39 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:46:39.328746 | orchestrator | 2025-05-25 00:46:39 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED
2025-05-25 00:46:39.328809 | orchestrator | 2025-05-25 00:46:39 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:46:42.363517 | orchestrator | 2025-05-25 00:46:42 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:46:42.368063 | orchestrator | 2025-05-25 00:46:42 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED
2025-05-25 00:46:42.369675 | orchestrator | 2025-05-25 00:46:42 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED
2025-05-25 00:46:42.370788 | orchestrator | 2025-05-25 00:46:42 | INFO  | Task 68fc4015-9a6b-4d57-a2c4-184618b8b7f1 is in state STARTED
2025-05-25 00:46:42.373421 | orchestrator | 2025-05-25 00:46:42 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:46:42.373451 | orchestrator | 2025-05-25 00:46:42 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED
2025-05-25 00:46:42.373464 | orchestrator | 2025-05-25 00:46:42 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:46:45.399999 | orchestrator | 2025-05-25 00:46:45 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:46:45.400141 | orchestrator | 2025-05-25 00:46:45 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED
2025-05-25 00:46:45.400221 | orchestrator | 2025-05-25 00:46:45 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED
2025-05-25 00:46:45.400651 | orchestrator | 2025-05-25 00:46:45 | INFO  | Task 68fc4015-9a6b-4d57-a2c4-184618b8b7f1 is in state STARTED
2025-05-25 00:46:45.408215 | orchestrator | 2025-05-25 00:46:45 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:46:45.408335 | orchestrator | 2025-05-25 00:46:45 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED
2025-05-25 00:46:45.408351 | orchestrator | 2025-05-25 00:46:45 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:46:48.463876 | orchestrator | 2025-05-25 00:46:48 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:46:48.468575 | orchestrator | 2025-05-25 00:46:48 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED
2025-05-25 00:46:48.468643 | orchestrator | 2025-05-25 00:46:48 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED
2025-05-25 00:46:48.471736 | orchestrator | 2025-05-25 00:46:48 | INFO  | Task 68fc4015-9a6b-4d57-a2c4-184618b8b7f1 is in state STARTED
2025-05-25 00:46:48.473050 | orchestrator | 2025-05-25 00:46:48 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:46:48.475243 | orchestrator | 2025-05-25 00:46:48 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED
2025-05-25 00:46:48.475319 | orchestrator | 2025-05-25 00:46:48 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:46:51.517230 | orchestrator | 2025-05-25 00:46:51 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:46:51.522167 | orchestrator | 2025-05-25 00:46:51 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED
2025-05-25 00:46:51.525608 | orchestrator | 2025-05-25 00:46:51 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED
2025-05-25 00:46:51.529101 | orchestrator | 2025-05-25 00:46:51 | INFO  | Task 68fc4015-9a6b-4d57-a2c4-184618b8b7f1 is in state STARTED
2025-05-25 00:46:51.531159 | orchestrator | 2025-05-25 00:46:51 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:46:51.532744 | orchestrator | 2025-05-25 00:46:51 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED
2025-05-25 00:46:51.532770 | orchestrator | 2025-05-25 00:46:51 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:46:54.599205 | orchestrator | 2025-05-25 00:46:54 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:46:54.599558 | orchestrator | 2025-05-25 00:46:54 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED
2025-05-25 00:46:54.601200 | orchestrator | 2025-05-25 00:46:54 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED
2025-05-25 00:46:54.602619 | orchestrator |
2025-05-25 00:46:54.602672 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-25 00:46:54.602687 | orchestrator |
2025-05-25 00:46:54.602699 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-05-25 00:46:54.602711 | orchestrator | Sunday 25 May 2025 00:46:43 +0000 (0:00:00.286) 0:00:00.286 ************
2025-05-25 00:46:54.602723 | orchestrator | changed: [testbed-manager]
2025-05-25 00:46:54.602735 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:46:54.602746 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:46:54.602757 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:46:54.602768 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:46:54.602780 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:46:54.602799 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:46:54.602816 | orchestrator |
2025-05-25 00:46:54.602833 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-05-25 00:46:54.602877 | orchestrator | Sunday 25 May 2025 00:46:46 +0000 (0:00:03.522) 0:00:03.809 ************
2025-05-25 00:46:54.602900 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-25 00:46:54.602920 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-25 00:46:54.602939 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-25 00:46:54.602951 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-25 00:46:54.602962 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-25 00:46:54.602973 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-25 00:46:54.602983 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-25 00:46:54.603024 | orchestrator |
2025-05-25 00:46:54.603036 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-05-25 00:46:54.603046 | orchestrator | Sunday 25 May 2025 00:46:48 +0000 (0:00:01.550) 0:00:05.359 ************
2025-05-25 00:46:54.603062 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 00:46:47.467408', 'end': '2025-05-25 00:46:47.471183', 'delta': '0:00:00.003775', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 00:46:54.603087 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 00:46:47.423243', 'end': '2025-05-25 00:46:47.432384', 'delta': '0:00:00.009141', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 00:46:54.603103 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 00:46:47.410013', 'end': '2025-05-25 00:46:47.418608', 'delta': '0:00:00.008595', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 00:46:54.603142 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 00:46:47.539729', 'end': '2025-05-25 00:46:47.548189', 'delta': '0:00:00.008460', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 00:46:54.603154 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 00:46:47.681151', 'end': '2025-05-25 00:46:47.689439', 'delta': '0:00:00.008288', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 00:46:54.603176 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 00:46:47.895945', 'end': '2025-05-25 00:46:47.904643', 'delta': '0:00:00.008698', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 00:46:54.603196 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-25 00:46:48.171813', 'end': '2025-05-25 00:46:48.180130', 'delta': '0:00:00.008317', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-25 00:46:54.603216 | orchestrator |
2025-05-25 00:46:54.603335 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-25 00:46:54.603361 | orchestrator | Sunday 25 May 2025 00:46:50 +0000 (0:00:02.130) 0:00:07.490 ************
2025-05-25 00:46:54.603381 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-25 00:46:54.603401 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-25 00:46:54.603421 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-25 00:46:54.603440 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-25 00:46:54.603462 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-25 00:46:54.603485 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-25 00:46:54.603510 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-25 00:46:54.603530 | orchestrator |
2025-05-25 00:46:54.603551 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:46:54.603574 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:46:54.603594 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:46:54.603613 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:46:54.603648 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:46:54.603661 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:46:54.603672 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:46:54.603695 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:46:54.603706 | orchestrator |
2025-05-25 00:46:54.603717 | orchestrator | Sunday 25 May 2025 00:46:52 +0000 (0:00:02.278) 0:00:09.768 ************
2025-05-25 00:46:54.603728 | orchestrator | ===============================================================================
2025-05-25 00:46:54.603748 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.52s
2025-05-25 00:46:54.603766 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.28s
2025-05-25 00:46:54.603785 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.13s
2025-05-25 00:46:54.603867 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.55s
2025-05-25 00:46:54.603936 | orchestrator | 2025-05-25 00:46:54 | INFO  | Task 68fc4015-9a6b-4d57-a2c4-184618b8b7f1 is in state SUCCESS
2025-05-25 00:46:54.604058 | orchestrator | 2025-05-25 00:46:54 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:46:54.604074 | orchestrator | 2025-05-25 00:46:54 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED
2025-05-25 00:46:54.604827 | orchestrator | 2025-05-25 00:46:54 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:46:57.655606 | orchestrator | 2025-05-25 00:46:57 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:46:57.657970 | orchestrator | 2025-05-25 00:46:57 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED
2025-05-25 00:46:57.658109 | orchestrator | 2025-05-25 00:46:57 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED
2025-05-25 00:46:57.658128 | orchestrator | 2025-05-25 00:46:57 | INFO  | Task
31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:46:57.661127 | orchestrator | 2025-05-25 00:46:57 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:46:57.661835 | orchestrator | 2025-05-25 00:46:57 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:46:57.662097 | orchestrator | 2025-05-25 00:46:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:00.730479 | orchestrator | 2025-05-25 00:47:00 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:00.732956 | orchestrator | 2025-05-25 00:47:00 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED 2025-05-25 00:47:00.746160 | orchestrator | 2025-05-25 00:47:00 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:00.750799 | orchestrator | 2025-05-25 00:47:00 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:00.778124 | orchestrator | 2025-05-25 00:47:00 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:00.781113 | orchestrator | 2025-05-25 00:47:00 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:00.782208 | orchestrator | 2025-05-25 00:47:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:03.835637 | orchestrator | 2025-05-25 00:47:03 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:03.838614 | orchestrator | 2025-05-25 00:47:03 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED 2025-05-25 00:47:03.841943 | orchestrator | 2025-05-25 00:47:03 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:03.842134 | orchestrator | 2025-05-25 00:47:03 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:03.842819 | orchestrator | 2025-05-25 00:47:03 | INFO  | Task 
0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:03.843964 | orchestrator | 2025-05-25 00:47:03 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:03.843993 | orchestrator | 2025-05-25 00:47:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:06.971010 | orchestrator | 2025-05-25 00:47:06 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:06.975401 | orchestrator | 2025-05-25 00:47:06 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED 2025-05-25 00:47:06.977270 | orchestrator | 2025-05-25 00:47:06 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:06.978832 | orchestrator | 2025-05-25 00:47:06 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:06.979381 | orchestrator | 2025-05-25 00:47:06 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:06.980302 | orchestrator | 2025-05-25 00:47:06 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:06.980350 | orchestrator | 2025-05-25 00:47:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:10.082307 | orchestrator | 2025-05-25 00:47:10 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:10.082406 | orchestrator | 2025-05-25 00:47:10 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED 2025-05-25 00:47:10.083674 | orchestrator | 2025-05-25 00:47:10 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:10.084709 | orchestrator | 2025-05-25 00:47:10 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:10.089635 | orchestrator | 2025-05-25 00:47:10 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:10.091784 | orchestrator | 2025-05-25 00:47:10 | INFO  | Task 
03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:10.091814 | orchestrator | 2025-05-25 00:47:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:13.147187 | orchestrator | 2025-05-25 00:47:13 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:13.147328 | orchestrator | 2025-05-25 00:47:13 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED 2025-05-25 00:47:13.147427 | orchestrator | 2025-05-25 00:47:13 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:13.148743 | orchestrator | 2025-05-25 00:47:13 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:13.151386 | orchestrator | 2025-05-25 00:47:13 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:13.151445 | orchestrator | 2025-05-25 00:47:13 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:13.151463 | orchestrator | 2025-05-25 00:47:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:16.189766 | orchestrator | 2025-05-25 00:47:16 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:16.191540 | orchestrator | 2025-05-25 00:47:16 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state STARTED 2025-05-25 00:47:16.193050 | orchestrator | 2025-05-25 00:47:16 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:16.193088 | orchestrator | 2025-05-25 00:47:16 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:16.195213 | orchestrator | 2025-05-25 00:47:16 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:16.197547 | orchestrator | 2025-05-25 00:47:16 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:16.197582 | orchestrator | 2025-05-25 00:47:16 | INFO  | Wait 1 
second(s) until the next check 2025-05-25 00:47:19.241546 | orchestrator | 2025-05-25 00:47:19 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:19.241658 | orchestrator | 2025-05-25 00:47:19 | INFO  | Task d8793eb3-ec28-40c2-a84d-cf4d5af53678 is in state SUCCESS 2025-05-25 00:47:19.244929 | orchestrator | 2025-05-25 00:47:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:19.245908 | orchestrator | 2025-05-25 00:47:19 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:19.248850 | orchestrator | 2025-05-25 00:47:19 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:19.256888 | orchestrator | 2025-05-25 00:47:19 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:19.258200 | orchestrator | 2025-05-25 00:47:19 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:19.258336 | orchestrator | 2025-05-25 00:47:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:22.306467 | orchestrator | 2025-05-25 00:47:22 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:22.306577 | orchestrator | 2025-05-25 00:47:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:22.306593 | orchestrator | 2025-05-25 00:47:22 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:22.306605 | orchestrator | 2025-05-25 00:47:22 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:22.309603 | orchestrator | 2025-05-25 00:47:22 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:22.310606 | orchestrator | 2025-05-25 00:47:22 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:22.310642 | orchestrator | 2025-05-25 00:47:22 | INFO  | Wait 1 
second(s) until the next check 2025-05-25 00:47:25.364501 | orchestrator | 2025-05-25 00:47:25 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:25.368947 | orchestrator | 2025-05-25 00:47:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:25.372217 | orchestrator | 2025-05-25 00:47:25 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:25.373223 | orchestrator | 2025-05-25 00:47:25 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:25.374367 | orchestrator | 2025-05-25 00:47:25 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:25.375904 | orchestrator | 2025-05-25 00:47:25 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:25.376282 | orchestrator | 2025-05-25 00:47:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:28.423695 | orchestrator | 2025-05-25 00:47:28 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:28.426658 | orchestrator | 2025-05-25 00:47:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:28.428144 | orchestrator | 2025-05-25 00:47:28 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:28.429344 | orchestrator | 2025-05-25 00:47:28 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:28.429749 | orchestrator | 2025-05-25 00:47:28 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:28.429955 | orchestrator | 2025-05-25 00:47:28 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:28.430058 | orchestrator | 2025-05-25 00:47:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:31.470306 | orchestrator | 2025-05-25 00:47:31 | INFO  | Task 
ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:31.474198 | orchestrator | 2025-05-25 00:47:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:31.475539 | orchestrator | 2025-05-25 00:47:31 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:31.476350 | orchestrator | 2025-05-25 00:47:31 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:31.477457 | orchestrator | 2025-05-25 00:47:31 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:31.479634 | orchestrator | 2025-05-25 00:47:31 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:31.479662 | orchestrator | 2025-05-25 00:47:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:34.534623 | orchestrator | 2025-05-25 00:47:34 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:34.535212 | orchestrator | 2025-05-25 00:47:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:34.536206 | orchestrator | 2025-05-25 00:47:34 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:34.538064 | orchestrator | 2025-05-25 00:47:34 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:34.544018 | orchestrator | 2025-05-25 00:47:34 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:34.544048 | orchestrator | 2025-05-25 00:47:34 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state STARTED 2025-05-25 00:47:34.544061 | orchestrator | 2025-05-25 00:47:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:37.592090 | orchestrator | 2025-05-25 00:47:37 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:37.592199 | orchestrator | 2025-05-25 00:47:37 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:37.592214 | orchestrator | 2025-05-25 00:47:37 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:37.592226 | orchestrator | 2025-05-25 00:47:37 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:37.592286 | orchestrator | 2025-05-25 00:47:37 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:37.592402 | orchestrator | 2025-05-25 00:47:37 | INFO  | Task 03d6debf-eecb-4403-a433-3cecb17eebd5 is in state SUCCESS 2025-05-25 00:47:37.592419 | orchestrator | 2025-05-25 00:47:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:40.637872 | orchestrator | 2025-05-25 00:47:40 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:40.637985 | orchestrator | 2025-05-25 00:47:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:40.642454 | orchestrator | 2025-05-25 00:47:40 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:40.644219 | orchestrator | 2025-05-25 00:47:40 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:40.644271 | orchestrator | 2025-05-25 00:47:40 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:40.644284 | orchestrator | 2025-05-25 00:47:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:43.687585 | orchestrator | 2025-05-25 00:47:43 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:43.687770 | orchestrator | 2025-05-25 00:47:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:43.688386 | orchestrator | 2025-05-25 00:47:43 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state STARTED 2025-05-25 00:47:43.688937 | orchestrator | 2025-05-25 00:47:43 | INFO  | Task 
31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED 2025-05-25 00:47:43.689701 | orchestrator | 2025-05-25 00:47:43 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED 2025-05-25 00:47:43.689864 | orchestrator | 2025-05-25 00:47:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:47:46.724607 | orchestrator | 2025-05-25 00:47:46 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:47:46.724869 | orchestrator | 2025-05-25 00:47:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:47:46.724898 | orchestrator | 2025-05-25 00:47:46 | INFO  | Task 8ab95b17-60d3-410a-b06c-36a73edda384 is in state SUCCESS 2025-05-25 00:47:46.725767 | orchestrator | 2025-05-25 00:47:46.725802 | orchestrator | 2025-05-25 00:47:46.725815 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-25 00:47:46.725829 | orchestrator | 2025-05-25 00:47:46.725842 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-25 00:47:46.725855 | orchestrator | Sunday 25 May 2025 00:46:43 +0000 (0:00:00.363) 0:00:00.363 ************ 2025-05-25 00:47:46.725868 | orchestrator | ok: [testbed-manager] => { 2025-05-25 00:47:46.725881 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-05-25 00:47:46.725894 | orchestrator | } 2025-05-25 00:47:46.725906 | orchestrator | 2025-05-25 00:47:46.725917 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-25 00:47:46.725928 | orchestrator | Sunday 25 May 2025 00:46:43 +0000 (0:00:00.174) 0:00:00.537 ************ 2025-05-25 00:47:46.725939 | orchestrator | ok: [testbed-manager] 2025-05-25 00:47:46.725951 | orchestrator | 2025-05-25 00:47:46.725962 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-25 00:47:46.725973 | orchestrator | Sunday 25 May 2025 00:46:44 +0000 (0:00:01.093) 0:00:01.631 ************ 2025-05-25 00:47:46.725984 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-25 00:47:46.725996 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-25 00:47:46.726007 | orchestrator | 2025-05-25 00:47:46.726101 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-25 00:47:46.726115 | orchestrator | Sunday 25 May 2025 00:46:45 +0000 (0:00:00.837) 0:00:02.469 ************ 2025-05-25 00:47:46.726126 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.726136 | orchestrator | 2025-05-25 00:47:46.726147 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-25 00:47:46.726187 | orchestrator | Sunday 25 May 2025 00:46:47 +0000 (0:00:02.455) 0:00:04.924 ************ 2025-05-25 00:47:46.726198 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.726209 | orchestrator | 2025-05-25 00:47:46.726278 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-25 00:47:46.726302 | orchestrator | Sunday 25 May 2025 00:46:49 +0000 (0:00:01.441) 0:00:06.365 ************ 2025-05-25 00:47:46.726346 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-05-25 00:47:46.726359 | orchestrator | ok: [testbed-manager] 2025-05-25 00:47:46.726369 | orchestrator | 2025-05-25 00:47:46.726381 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-25 00:47:46.726391 | orchestrator | Sunday 25 May 2025 00:47:13 +0000 (0:00:24.830) 0:00:31.195 ************ 2025-05-25 00:47:46.726402 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.726413 | orchestrator | 2025-05-25 00:47:46.726424 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:47:46.726435 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 00:47:46.726447 | orchestrator | 2025-05-25 00:47:46.726458 | orchestrator | Sunday 25 May 2025 00:47:16 +0000 (0:00:02.522) 0:00:33.718 ************ 2025-05-25 00:47:46.726476 | orchestrator | =============================================================================== 2025-05-25 00:47:46.726487 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.83s 2025-05-25 00:47:46.726497 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.52s 2025-05-25 00:47:46.726508 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.46s 2025-05-25 00:47:46.726518 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.44s 2025-05-25 00:47:46.726529 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.09s 2025-05-25 00:47:46.726540 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.84s 2025-05-25 00:47:46.726550 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.17s 2025-05-25 00:47:46.726562 | orchestrator | 2025-05-25 00:47:46.726574 | orchestrator | 2025-05-25 
00:47:46.726584 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-25 00:47:46.726595 | orchestrator | 2025-05-25 00:47:46.726606 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-25 00:47:46.726616 | orchestrator | Sunday 25 May 2025 00:46:43 +0000 (0:00:00.169) 0:00:00.169 ************ 2025-05-25 00:47:46.726627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-25 00:47:46.726640 | orchestrator | 2025-05-25 00:47:46.726650 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-25 00:47:46.726661 | orchestrator | Sunday 25 May 2025 00:46:43 +0000 (0:00:00.209) 0:00:00.378 ************ 2025-05-25 00:47:46.726671 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-25 00:47:46.726682 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-25 00:47:46.726693 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-25 00:47:46.726704 | orchestrator | 2025-05-25 00:47:46.726714 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-25 00:47:46.726725 | orchestrator | Sunday 25 May 2025 00:46:44 +0000 (0:00:01.313) 0:00:01.692 ************ 2025-05-25 00:47:46.726736 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.726746 | orchestrator | 2025-05-25 00:47:46.726757 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-25 00:47:46.726768 | orchestrator | Sunday 25 May 2025 00:46:46 +0000 (0:00:01.190) 0:00:02.882 ************ 2025-05-25 00:47:46.726778 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 
2025-05-25 00:47:46.726789 | orchestrator | ok: [testbed-manager] 2025-05-25 00:47:46.726800 | orchestrator | 2025-05-25 00:47:46.726824 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-25 00:47:46.726836 | orchestrator | Sunday 25 May 2025 00:47:27 +0000 (0:00:41.052) 0:00:43.934 ************ 2025-05-25 00:47:46.726854 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.726865 | orchestrator | 2025-05-25 00:47:46.726876 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-25 00:47:46.726887 | orchestrator | Sunday 25 May 2025 00:47:28 +0000 (0:00:01.386) 0:00:45.320 ************ 2025-05-25 00:47:46.726898 | orchestrator | ok: [testbed-manager] 2025-05-25 00:47:46.726908 | orchestrator | 2025-05-25 00:47:46.726919 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-25 00:47:46.726930 | orchestrator | Sunday 25 May 2025 00:47:29 +0000 (0:00:00.836) 0:00:46.157 ************ 2025-05-25 00:47:46.726941 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.726952 | orchestrator | 2025-05-25 00:47:46.726963 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-25 00:47:46.726973 | orchestrator | Sunday 25 May 2025 00:47:31 +0000 (0:00:02.626) 0:00:48.783 ************ 2025-05-25 00:47:46.726984 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.726995 | orchestrator | 2025-05-25 00:47:46.727005 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-25 00:47:46.727016 | orchestrator | Sunday 25 May 2025 00:47:33 +0000 (0:00:01.128) 0:00:49.912 ************ 2025-05-25 00:47:46.727027 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.727038 | orchestrator | 2025-05-25 00:47:46.727048 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : 
Copy bash completion script] *** 2025-05-25 00:47:46.727059 | orchestrator | Sunday 25 May 2025 00:47:33 +0000 (0:00:00.705) 0:00:50.617 ************ 2025-05-25 00:47:46.727069 | orchestrator | ok: [testbed-manager] 2025-05-25 00:47:46.727080 | orchestrator | 2025-05-25 00:47:46.727091 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:47:46.727102 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 00:47:46.727113 | orchestrator | 2025-05-25 00:47:46.727123 | orchestrator | Sunday 25 May 2025 00:47:34 +0000 (0:00:00.481) 0:00:51.098 ************ 2025-05-25 00:47:46.727134 | orchestrator | =============================================================================== 2025-05-25 00:47:46.727145 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 41.05s 2025-05-25 00:47:46.727155 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.63s 2025-05-25 00:47:46.727166 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.39s 2025-05-25 00:47:46.727177 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.31s 2025-05-25 00:47:46.727188 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.19s 2025-05-25 00:47:46.727198 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.13s 2025-05-25 00:47:46.727209 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.84s 2025-05-25 00:47:46.727268 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.71s 2025-05-25 00:47:46.727281 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.48s 2025-05-25 00:47:46.727292 | orchestrator | osism.services.openstackclient : 
Include tasks -------------------------- 0.21s 2025-05-25 00:47:46.727303 | orchestrator | 2025-05-25 00:47:46.727314 | orchestrator | 2025-05-25 00:47:46.727325 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 00:47:46.727335 | orchestrator | 2025-05-25 00:47:46.727346 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 00:47:46.727357 | orchestrator | Sunday 25 May 2025 00:46:42 +0000 (0:00:00.329) 0:00:00.329 ************ 2025-05-25 00:47:46.727368 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-25 00:47:46.727378 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-25 00:47:46.727389 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-25 00:47:46.727400 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-25 00:47:46.727418 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-25 00:47:46.727428 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-25 00:47:46.727439 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-25 00:47:46.727449 | orchestrator | 2025-05-25 00:47:46.727460 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-25 00:47:46.727471 | orchestrator | 2025-05-25 00:47:46.727482 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-25 00:47:46.727492 | orchestrator | Sunday 25 May 2025 00:46:44 +0000 (0:00:01.793) 0:00:02.122 ************ 2025-05-25 00:47:46.727517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 
00:47:46.727531 | orchestrator | 2025-05-25 00:47:46.727541 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-25 00:47:46.727552 | orchestrator | Sunday 25 May 2025 00:46:45 +0000 (0:00:01.405) 0:00:03.528 ************ 2025-05-25 00:47:46.727563 | orchestrator | ok: [testbed-manager] 2025-05-25 00:47:46.727574 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:47:46.727585 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:47:46.727595 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:47:46.727606 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:47:46.727617 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:47:46.727627 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:47:46.727638 | orchestrator | 2025-05-25 00:47:46.727649 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-25 00:47:46.727667 | orchestrator | Sunday 25 May 2025 00:46:47 +0000 (0:00:02.107) 0:00:05.635 ************ 2025-05-25 00:47:46.727678 | orchestrator | ok: [testbed-manager] 2025-05-25 00:47:46.727689 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:47:46.727699 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:47:46.727710 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:47:46.727720 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:47:46.727731 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:47:46.727742 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:47:46.727752 | orchestrator | 2025-05-25 00:47:46.727763 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-25 00:47:46.727774 | orchestrator | Sunday 25 May 2025 00:46:50 +0000 (0:00:02.836) 0:00:08.472 ************ 2025-05-25 00:47:46.727785 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.727796 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:47:46.727806 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:47:46.727817 | 
orchestrator | changed: [testbed-node-2] 2025-05-25 00:47:46.727828 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:47:46.727838 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:47:46.727849 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:47:46.727860 | orchestrator | 2025-05-25 00:47:46.727870 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-25 00:47:46.727881 | orchestrator | Sunday 25 May 2025 00:46:52 +0000 (0:00:02.418) 0:00:10.890 ************ 2025-05-25 00:47:46.727892 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.727902 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:47:46.727913 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:47:46.727924 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:47:46.727934 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:47:46.727945 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:47:46.727956 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:47:46.727967 | orchestrator | 2025-05-25 00:47:46.727977 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-25 00:47:46.727988 | orchestrator | Sunday 25 May 2025 00:47:02 +0000 (0:00:09.822) 0:00:20.713 ************ 2025-05-25 00:47:46.727999 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:47:46.728017 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:47:46.728028 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:47:46.728038 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:47:46.728049 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:47:46.728059 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:47:46.728070 | orchestrator | changed: [testbed-manager] 2025-05-25 00:47:46.728081 | orchestrator | 2025-05-25 00:47:46.728092 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-25 
00:47:46.728102 | orchestrator | Sunday 25 May 2025 00:47:21 +0000 (0:00:18.319) 0:00:39.032 ************ 2025-05-25 00:47:46.728114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:47:46.728127 | orchestrator | 2025-05-25 00:47:46.728138 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-25 00:47:46.728149 | orchestrator | Sunday 25 May 2025 00:47:22 +0000 (0:00:01.554) 0:00:40.587 ************ 2025-05-25 00:47:46.728160 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-25 00:47:46.728180 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-25 00:47:46.728192 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-25 00:47:46.728203 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-25 00:47:46.728213 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-25 00:47:46.728241 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-25 00:47:46.728252 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-25 00:47:46.728263 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-25 00:47:46.728274 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-25 00:47:46.728285 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-25 00:47:46.728296 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-25 00:47:46.728306 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-25 00:47:46.728317 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-25 00:47:46.728328 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-25 00:47:46.728338 | orchestrator | 
2025-05-25 00:47:46.728349 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-25 00:47:46.728360 | orchestrator | Sunday 25 May 2025 00:47:29 +0000 (0:00:06.638) 0:00:47.226 ************
2025-05-25 00:47:46.728371 | orchestrator | ok: [testbed-manager]
2025-05-25 00:47:46.728382 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:47:46.728393 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:47:46.728404 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:47:46.728415 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:47:46.728425 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:47:46.728436 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:47:46.728446 | orchestrator |
2025-05-25 00:47:46.728457 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-25 00:47:46.728468 | orchestrator | Sunday 25 May 2025 00:47:31 +0000 (0:00:01.740) 0:00:48.966 ************
2025-05-25 00:47:46.728479 | orchestrator | changed: [testbed-manager]
2025-05-25 00:47:46.728490 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:47:46.728501 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:47:46.728512 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:47:46.728522 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:47:46.728533 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:47:46.728544 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:47:46.728554 | orchestrator |
2025-05-25 00:47:46.728565 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-25 00:47:46.728576 | orchestrator | Sunday 25 May 2025 00:47:33 +0000 (0:00:02.329) 0:00:51.295 ************
2025-05-25 00:47:46.728587 | orchestrator | ok: [testbed-manager]
2025-05-25 00:47:46.728605 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:47:46.728621 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:47:46.728640 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:47:46.728672 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:47:46.728699 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:47:46.728717 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:47:46.728734 | orchestrator |
2025-05-25 00:47:46.728761 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-25 00:47:46.728781 | orchestrator | Sunday 25 May 2025 00:47:35 +0000 (0:00:02.351) 0:00:53.647 ************
2025-05-25 00:47:46.728800 | orchestrator | ok: [testbed-manager]
2025-05-25 00:47:46.728819 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:47:46.728837 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:47:46.728855 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:47:46.728872 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:47:46.728882 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:47:46.728893 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:47:46.728905 | orchestrator |
2025-05-25 00:47:46.728924 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-25 00:47:46.728942 | orchestrator | Sunday 25 May 2025 00:47:37 +0000 (0:00:02.177) 0:00:55.825 ************
2025-05-25 00:47:46.728959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-25 00:47:46.728979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:47:46.728996 | orchestrator |
2025-05-25 00:47:46.729015 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-25 00:47:46.729034 | orchestrator | Sunday 25 May 2025 00:47:39 +0000 (0:00:01.623) 0:00:57.448 ************
2025-05-25 00:47:46.729054 | orchestrator | changed: [testbed-manager]
2025-05-25 00:47:46.729072 | orchestrator |
2025-05-25 00:47:46.729088 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-25 00:47:46.729099 | orchestrator | Sunday 25 May 2025 00:47:41 +0000 (0:00:02.215) 0:00:59.664 ************
2025-05-25 00:47:46.729110 | orchestrator | changed: [testbed-manager]
2025-05-25 00:47:46.729121 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:47:46.729131 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:47:46.729142 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:47:46.729152 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:47:46.729163 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:47:46.729174 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:47:46.729184 | orchestrator |
2025-05-25 00:47:46.729195 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:47:46.729206 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:47:46.729218 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:47:46.729257 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:47:46.729269 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:47:46.729281 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:47:46.729292 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:47:46.729307 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:47:46.729339 | orchestrator |
2025-05-25 00:47:46.729357 | orchestrator | Sunday 25 May 2025 00:47:44 +0000 (0:00:03.021) 0:01:02.685 ************
2025-05-25 00:47:46.729374 | orchestrator | ===============================================================================
2025-05-25 00:47:46.729392 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.32s
2025-05-25 00:47:46.729411 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.82s
2025-05-25 00:47:46.729429 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.64s
2025-05-25 00:47:46.729447 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.02s
2025-05-25 00:47:46.729467 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.84s
2025-05-25 00:47:46.729485 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.42s
2025-05-25 00:47:46.729504 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.35s
2025-05-25 00:47:46.729517 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.33s
2025-05-25 00:47:46.729527 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.22s
2025-05-25 00:47:46.729538 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.18s
2025-05-25 00:47:46.729548 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.11s
2025-05-25 00:47:46.729559 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.79s
2025-05-25 00:47:46.729570 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.74s
2025-05-25 00:47:46.729580 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.62s
2025-05-25 00:47:46.729601 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.55s
2025-05-25 00:47:46.729612 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.41s
2025-05-25 00:47:46.732465 | orchestrator | 2025-05-25 00:47:46 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED
2025-05-25 00:47:46.732677 | orchestrator | 2025-05-25 00:47:46 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:47:46.732704 | orchestrator | 2025-05-25 00:47:46 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:47:49.779356 | orchestrator | 2025-05-25 00:47:49 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:47:49.779469 | orchestrator | 2025-05-25 00:47:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:47:49.779485 | orchestrator | 2025-05-25 00:47:49 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED
2025-05-25 00:47:49.779498 | orchestrator | 2025-05-25 00:47:49 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:47:49.779582 | orchestrator | 2025-05-25 00:47:49 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:47:52.819495 | orchestrator | 2025-05-25 00:47:52 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:47:52.819601 | orchestrator | 2025-05-25 00:47:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:47:52.819685 | orchestrator | 2025-05-25 00:47:52 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED
2025-05-25 00:47:52.819909 | orchestrator | 2025-05-25 00:47:52 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:47:52.819971 | orchestrator | 2025-05-25 00:47:52 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:47:55.863971 | orchestrator | 2025-05-25 00:47:55 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:47:55.864089 | orchestrator | 2025-05-25 00:47:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:47:55.864154 | orchestrator | 2025-05-25 00:47:55 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED
2025-05-25 00:47:55.866261 | orchestrator | 2025-05-25 00:47:55 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:47:55.866325 | orchestrator | 2025-05-25 00:47:55 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:47:58.921053 | orchestrator | 2025-05-25 00:47:58 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:47:58.921164 | orchestrator | 2025-05-25 00:47:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:47:58.921179 | orchestrator | 2025-05-25 00:47:58 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state STARTED
2025-05-25 00:47:58.921191 | orchestrator | 2025-05-25 00:47:58 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:47:58.921202 | orchestrator | 2025-05-25 00:47:58 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:01.974098 | orchestrator | 2025-05-25 00:48:01 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:01.975945 | orchestrator | 2025-05-25 00:48:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:01.977411 | orchestrator | 2025-05-25 00:48:01 | INFO  | Task 31c6659a-9cdd-4aa2-9bd2-46c186e999ff is in state SUCCESS
2025-05-25 00:48:01.978816 | orchestrator | 2025-05-25 00:48:01 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:01.978915 | orchestrator | 2025-05-25 00:48:01 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:05.036530 | orchestrator | 2025-05-25 00:48:05 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:05.036665 | orchestrator | 2025-05-25 00:48:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:05.036688 | orchestrator | 2025-05-25 00:48:05 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:05.036702 | orchestrator | 2025-05-25 00:48:05 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:08.071942 | orchestrator | 2025-05-25 00:48:08 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:08.074082 | orchestrator | 2025-05-25 00:48:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:08.075373 | orchestrator | 2025-05-25 00:48:08 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:08.075383 | orchestrator | 2025-05-25 00:48:08 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:11.109679 | orchestrator | 2025-05-25 00:48:11 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:11.110318 | orchestrator | 2025-05-25 00:48:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:11.113526 | orchestrator | 2025-05-25 00:48:11 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:11.113549 | orchestrator | 2025-05-25 00:48:11 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:14.159265 | orchestrator | 2025-05-25 00:48:14 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:14.159828 | orchestrator | 2025-05-25 00:48:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:14.162076 | orchestrator | 2025-05-25 00:48:14 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:14.162140 | orchestrator | 2025-05-25 00:48:14 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:17.204414 | orchestrator | 2025-05-25 00:48:17 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:17.205066 | orchestrator | 2025-05-25 00:48:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:17.206896 | orchestrator | 2025-05-25 00:48:17 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:17.207391 | orchestrator | 2025-05-25 00:48:17 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:20.247511 | orchestrator | 2025-05-25 00:48:20 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:20.247619 | orchestrator | 2025-05-25 00:48:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:20.256551 | orchestrator | 2025-05-25 00:48:20 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:20.256609 | orchestrator | 2025-05-25 00:48:20 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:23.290133 | orchestrator | 2025-05-25 00:48:23 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:23.290640 | orchestrator | 2025-05-25 00:48:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:23.292095 | orchestrator | 2025-05-25 00:48:23 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:23.292121 | orchestrator | 2025-05-25 00:48:23 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:26.344277 | orchestrator | 2025-05-25 00:48:26 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:26.344355 | orchestrator | 2025-05-25 00:48:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:26.346389 | orchestrator | 2025-05-25 00:48:26 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:26.346432 | orchestrator | 2025-05-25 00:48:26 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:29.388076 | orchestrator | 2025-05-25 00:48:29 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:29.388330 | orchestrator | 2025-05-25 00:48:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:29.390990 | orchestrator | 2025-05-25 00:48:29 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:29.391036 | orchestrator | 2025-05-25 00:48:29 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:32.438889 | orchestrator | 2025-05-25 00:48:32 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:32.439194 | orchestrator | 2025-05-25 00:48:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:32.440032 | orchestrator | 2025-05-25 00:48:32 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:32.440094 | orchestrator | 2025-05-25 00:48:32 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:35.480085 | orchestrator | 2025-05-25 00:48:35 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:35.480848 | orchestrator | 2025-05-25 00:48:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:35.481466 | orchestrator | 2025-05-25 00:48:35 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:35.482183 | orchestrator | 2025-05-25 00:48:35 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:38.525019 | orchestrator | 2025-05-25 00:48:38 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:38.527528 | orchestrator | 2025-05-25 00:48:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:38.527603 | orchestrator | 2025-05-25 00:48:38 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:38.529129 | orchestrator | 2025-05-25 00:48:38 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:41.576165 | orchestrator | 2025-05-25 00:48:41 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:41.576814 | orchestrator | 2025-05-25 00:48:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:41.579043 | orchestrator | 2025-05-25 00:48:41 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:41.579108 | orchestrator | 2025-05-25 00:48:41 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:44.618297 | orchestrator | 2025-05-25 00:48:44 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:44.618792 | orchestrator | 2025-05-25 00:48:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:44.620748 | orchestrator | 2025-05-25 00:48:44 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:44.620777 | orchestrator | 2025-05-25 00:48:44 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:47.675348 | orchestrator | 2025-05-25 00:48:47 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:47.677658 | orchestrator | 2025-05-25 00:48:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:47.678312 | orchestrator | 2025-05-25 00:48:47 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:47.678422 | orchestrator | 2025-05-25 00:48:47 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:50.719004 | orchestrator | 2025-05-25 00:48:50 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:50.719367 | orchestrator | 2025-05-25 00:48:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:50.719473 | orchestrator | 2025-05-25 00:48:50 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:50.719490 | orchestrator | 2025-05-25 00:48:50 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:53.768831 | orchestrator | 2025-05-25 00:48:53 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:53.769895 | orchestrator | 2025-05-25 00:48:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:53.770466 | orchestrator | 2025-05-25 00:48:53 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:53.770584 | orchestrator | 2025-05-25 00:48:53 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:56.820747 | orchestrator | 2025-05-25 00:48:56 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:56.821710 | orchestrator | 2025-05-25 00:48:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:56.822731 | orchestrator | 2025-05-25 00:48:56 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state STARTED
2025-05-25 00:48:56.822777 | orchestrator | 2025-05-25 00:48:56 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:48:59.868596 | orchestrator | 2025-05-25 00:48:59 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:48:59.868809 | orchestrator | 2025-05-25 00:48:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:48:59.873268 | orchestrator |
2025-05-25 00:48:59.873350 | orchestrator |
2025-05-25 00:48:59.873366 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-25 00:48:59.873379 | orchestrator |
2025-05-25 00:48:59.873391 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-25 00:48:59.873403 | orchestrator | Sunday 25 May 2025 00:46:57 +0000 (0:00:00.242) 0:00:00.242 ************
2025-05-25 00:48:59.873414 | orchestrator | ok: [testbed-manager]
2025-05-25 00:48:59.873426 | orchestrator |
2025-05-25 00:48:59.873437 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-25 00:48:59.873448 | orchestrator | Sunday 25 May 2025 00:46:58 +0000 (0:00:00.843) 0:00:01.086 ************
2025-05-25 00:48:59.873459 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-25 00:48:59.873470 | orchestrator |
2025-05-25 00:48:59.873481 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-25 00:48:59.873492 | orchestrator | Sunday 25 May 2025 00:46:59 +0000 (0:00:00.567) 0:00:01.653 ************
2025-05-25 00:48:59.873503 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.873514 | orchestrator |
2025-05-25 00:48:59.873525 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-25 00:48:59.873536 | orchestrator | Sunday 25 May 2025 00:47:00 +0000 (0:00:01.311) 0:00:02.965 ************
2025-05-25 00:48:59.873547 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-25 00:48:59.873558 | orchestrator | ok: [testbed-manager]
2025-05-25 00:48:59.873569 | orchestrator |
2025-05-25 00:48:59.873580 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-25 00:48:59.873591 | orchestrator | Sunday 25 May 2025 00:47:56 +0000 (0:00:56.466) 0:00:59.431 ************
2025-05-25 00:48:59.873602 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.873613 | orchestrator |
2025-05-25 00:48:59.873623 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:48:59.873635 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:48:59.873649 | orchestrator |
2025-05-25 00:48:59.873660 | orchestrator | Sunday 25 May 2025 00:48:00 +0000 (0:00:03.516) 0:01:02.947 ************
2025-05-25 00:48:59.873671 | orchestrator | ===============================================================================
2025-05-25 00:48:59.873682 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.47s
2025-05-25 00:48:59.873692 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.52s
2025-05-25 00:48:59.873703 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.31s
2025-05-25 00:48:59.873714 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.84s
2025-05-25 00:48:59.873725 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.57s
2025-05-25 00:48:59.873736 | orchestrator |
2025-05-25 00:48:59.873747 | orchestrator |
2025-05-25 00:48:59.873758 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-25 00:48:59.873769 | orchestrator |
2025-05-25 00:48:59.873780 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-25 00:48:59.873793 | orchestrator | Sunday 25 May 2025 00:46:38 +0000 (0:00:00.396) 0:00:00.396 ************
2025-05-25 00:48:59.873805 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:48:59.873819 | orchestrator |
2025-05-25 00:48:59.873831 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-25 00:48:59.873866 | orchestrator | Sunday 25 May 2025 00:46:40 +0000 (0:00:01.281) 0:00:01.678 ************
2025-05-25 00:48:59.873879 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 00:48:59.873898 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 00:48:59.873911 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 00:48:59.873923 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 00:48:59.873936 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 00:48:59.873949 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 00:48:59.873961 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 00:48:59.873974 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 00:48:59.873987 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 00:48:59.873999 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 00:48:59.874011 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-25 00:48:59.874110 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 00:48:59.874123 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 00:48:59.874136 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 00:48:59.874147 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-25 00:48:59.874158 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 00:48:59.874169 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 00:48:59.874219 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 00:48:59.874232 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 00:48:59.874243 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 00:48:59.874254 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-25 00:48:59.874265 | orchestrator |
2025-05-25 00:48:59.874276 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-25 00:48:59.874287 | orchestrator | Sunday 25 May 2025 00:46:43 +0000 (0:00:03.396) 0:00:05.074 ************
2025-05-25 00:48:59.874298 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:48:59.874310 | orchestrator |
2025-05-25 00:48:59.874321 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-25 00:48:59.874332 | orchestrator | Sunday 25 May 2025 00:46:45 +0000 (0:00:01.762) 0:00:06.837 ************
2025-05-25 00:48:59.874347 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.874363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.874386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.874403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.874415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.874427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.874447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.874459 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.874485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.874503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.874520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.874532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.874551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25
00:48:59.874564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874581 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.874673 | orchestrator 
| 2025-05-25 00:48:59.874685 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-25 00:48:59.874696 | orchestrator | Sunday 25 May 2025 00:46:49 +0000 (0:00:04.127) 0:00:10.964 ************ 2025-05-25 00:48:59.874714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.874732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874762 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.874774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874789 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:48:59.874801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874812 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.874838 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874850 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2025-05-25 00:48:59.874879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874902 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:48:59.874914 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:48:59.874925 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:48:59.874941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.874953 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.874975 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:48:59.874992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.875010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875033 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:48:59.875044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.875060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875083 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:48:59.875094 | orchestrator | 2025-05-25 00:48:59.875105 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-25 00:48:59.875116 | orchestrator | Sunday 25 May 2025 00:46:51 +0000 (0:00:01.694) 0:00:12.658 ************ 2025-05-25 00:48:59.875128 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.875145 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875162 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.875201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-25 00:48:59.875213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.875240 | orchestrator | skipping: [testbed-manager] 2025-05-25 00:48:59.875252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875286 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:48:59.875298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.875309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875332 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:48:59.875343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-25 00:48:59.875359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:48:59.875381 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:48:59.875392 | orchestrator | skipping: 
[testbed-node-3]
2025-05-25 00:48:59.875404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.875427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.875439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.875450 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:48:59.875461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.875473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.875484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.875495 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:48:59.875506 | orchestrator |
2025-05-25 00:48:59.875517 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-25 00:48:59.875533 | orchestrator | Sunday 25 May 2025 00:46:54 +0000 (0:00:03.019) 0:00:15.677 ************
2025-05-25 00:48:59.875544 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:48:59.875555 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:48:59.875565 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:48:59.875576 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:48:59.875587 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:48:59.875598 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:48:59.875608 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:48:59.875619 | orchestrator |
2025-05-25 00:48:59.875636 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-25 00:48:59.875647 | orchestrator | Sunday 25 May 2025 00:46:55 +0000 (0:00:00.996) 0:00:16.674 ************
2025-05-25 00:48:59.875658 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:48:59.875668 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:48:59.875679 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:48:59.875690 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:48:59.875701 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:48:59.875711 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:48:59.875722 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:48:59.875732 | orchestrator |
2025-05-25 00:48:59.875743 | orchestrator | TASK [common : Ensure fluentd image is present for label check] ****************
2025-05-25 00:48:59.875754 | orchestrator | Sunday 25 May 2025 00:46:56 +0000 (0:00:00.910) 0:00:17.585 ************
2025-05-25 00:48:59.875765 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:48:59.875776 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.875787 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.875798 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.875908 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.875921 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.875931 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.875942 | orchestrator |
2025-05-25 00:48:59.875953 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-05-25 00:48:59.875964 | orchestrator | Sunday 25 May 2025 00:47:34 +0000 (0:00:38.157) 0:00:55.742 ************
2025-05-25 00:48:59.875975 | orchestrator | ok: [testbed-manager]
2025-05-25 00:48:59.875993 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:48:59.876004 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:48:59.876015 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:48:59.876026 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:48:59.876036 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:48:59.876047 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:48:59.876058 | orchestrator |
2025-05-25 00:48:59.876069 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-25 00:48:59.876080 | orchestrator | Sunday 25 May 2025 00:47:37 +0000 (0:00:02.761) 0:00:58.504 ************
2025-05-25 00:48:59.876090 | orchestrator | ok: [testbed-manager]
2025-05-25 00:48:59.876101 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:48:59.876112 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:48:59.876122 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:48:59.876133 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:48:59.876143 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:48:59.876153 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:48:59.876164 | orchestrator |
2025-05-25 00:48:59.876175 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-05-25 00:48:59.876220 | orchestrator | Sunday 25 May 2025 00:47:38 +0000 (0:00:01.001) 0:00:59.506 ************
2025-05-25 00:48:59.876232 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:48:59.876243 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:48:59.876254 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:48:59.876264 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:48:59.876275 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:48:59.876285 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:48:59.876296 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:48:59.876306 | orchestrator |
2025-05-25 00:48:59.876317 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-25 00:48:59.876328 | orchestrator | Sunday 25 May 2025 00:47:38 +0000 (0:00:00.931) 0:01:00.437 ************
2025-05-25 00:48:59.876338 | orchestrator | skipping: [testbed-manager]
2025-05-25 00:48:59.876349 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:48:59.876360 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:48:59.876370 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:48:59.876381 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:48:59.876392 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:48:59.876415 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:48:59.876426 | orchestrator |
2025-05-25 00:48:59.876436 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-25 00:48:59.876447 | orchestrator | Sunday 25 May 2025 00:47:39 +0000 (0:00:00.691) 0:01:01.129 ************
2025-05-25 00:48:59.876458 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.876470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.876482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.876493 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.876523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.876535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.876570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.876598 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.876755 | orchestrator |
2025-05-25 00:48:59.876766 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-25 00:48:59.876783 | orchestrator | Sunday 25 May 2025 00:47:44 +0000 (0:00:04.540) 0:01:05.669 ************
2025-05-25 00:48:59.876795 | orchestrator | [WARNING]: Skipped
2025-05-25 00:48:59.876807 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-25 00:48:59.876818 | orchestrator | to this access issue:
2025-05-25 00:48:59.876829 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-25 00:48:59.876840 | orchestrator | directory
2025-05-25 00:48:59.876851 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 00:48:59.876861 | orchestrator |
2025-05-25 00:48:59.876872 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-25 00:48:59.876883 | orchestrator | Sunday 25 May 2025 00:47:45 +0000 (0:00:00.790) 0:01:06.460 ************
2025-05-25 00:48:59.876894 | orchestrator | [WARNING]: Skipped
2025-05-25 00:48:59.876905 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-05-25 00:48:59.876915 | orchestrator | to this access issue:
2025-05-25 00:48:59.876926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-05-25 00:48:59.876937 | orchestrator | directory
2025-05-25 00:48:59.876948 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 00:48:59.876959 | orchestrator |
2025-05-25 00:48:59.876970 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-25 00:48:59.876980 | orchestrator | Sunday 25 May 2025 00:47:45 +0000 (0:00:00.806) 0:01:07.267 ************
2025-05-25 00:48:59.876991 | orchestrator | [WARNING]: Skipped
2025-05-25 00:48:59.877002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-05-25 00:48:59.877013 | orchestrator | to this access issue:
2025-05-25 00:48:59.877024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-05-25 00:48:59.877034 | orchestrator | directory
2025-05-25 00:48:59.877045 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 00:48:59.877056 | orchestrator |
2025-05-25 00:48:59.877067 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-05-25 00:48:59.877079 | orchestrator | Sunday 25 May 2025 00:47:46 +0000 (0:00:00.477) 0:01:07.745 ************
2025-05-25 00:48:59.877089 | orchestrator | [WARNING]: Skipped
2025-05-25 00:48:59.877100 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-05-25 00:48:59.877166 | orchestrator | to this access issue:
2025-05-25 00:48:59.877177 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-05-25 00:48:59.877205 | orchestrator | directory
2025-05-25 00:48:59.877217 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 00:48:59.877228 | orchestrator |
2025-05-25 00:48:59.877239 | orchestrator | TASK [common : Copying over td-agent.conf] *************************************
2025-05-25 00:48:59.877249 | orchestrator | Sunday 25 May 2025 00:47:46 +0000 (0:00:00.501) 0:01:08.247 ************
2025-05-25 00:48:59.877260 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:48:59.877271 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.877282 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.877293 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.877309 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.877320 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.877331 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.877342 | orchestrator |
2025-05-25 00:48:59.877353 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-05-25 00:48:59.877363 | orchestrator | Sunday 25 May 2025 00:47:50 +0000 (0:00:04.185) 0:01:12.432 ************
2025-05-25 00:48:59.877374 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-25 00:48:59.877385 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-25 00:48:59.877396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-25 00:48:59.877414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-25 00:48:59.877427 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-25 00:48:59.877444 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-25 00:48:59.877463 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-25 00:48:59.877482 | orchestrator |
2025-05-25 00:48:59.877500 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-05-25 00:48:59.877519 | orchestrator | Sunday 25 May 2025 00:47:53 +0000 (0:00:02.147) 0:01:14.579 ************
2025-05-25 00:48:59.877537 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.877555 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:48:59.877573 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.877592 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.877609 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.877638 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.877657 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.877674 | orchestrator |
2025-05-25 00:48:59.877692 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-05-25 00:48:59.877710 | orchestrator | Sunday 25 May 2025 00:47:55 +0000 (0:00:02.234) 0:01:16.814 ************
2025-05-25 00:48:59.877729 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.877748 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877769 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.877788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877812 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.877833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877853 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877880 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.877903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877915 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.877926 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877961 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.877979 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.877990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.878002 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.878050 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-25 00:48:59.878065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.878084 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.878100 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.878111 | orchestrator |
2025-05-25 00:48:59.878123 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-05-25 00:48:59.878134 | orchestrator | Sunday 25 May 2025 00:47:57 +0000 (0:00:02.155) 0:01:18.969 ************
2025-05-25 00:48:59.878145 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-25 00:48:59.878156 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-25 00:48:59.878167 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-25 00:48:59.878178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-25 00:48:59.878356 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-25 00:48:59.878396 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-25 00:48:59.878407 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-25 00:48:59.878418 | orchestrator |
2025-05-25 00:48:59.878428 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-05-25 00:48:59.878451 | orchestrator | Sunday 25 May 2025 00:48:00 +0000 (0:00:02.608) 0:01:21.577 ************
2025-05-25 00:48:59.878459 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-25 00:48:59.878467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-25 00:48:59.878475 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-25 00:48:59.878483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-25 00:48:59.878491 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-25 00:48:59.878499 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-25 00:48:59.878507 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-25 00:48:59.878515 | orchestrator |
2025-05-25 00:48:59.878523 | orchestrator | TASK [common : Check common containers] ****************************************
2025-05-25
00:48:59.878530 | orchestrator | Sunday 25 May 2025 00:48:03 +0000 (0:00:03.464) 0:01:25.042 ************ 2025-05-25 00:48:59.878540 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 00:48:59.878582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 00:48:59.878591 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878603 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 00:48:59.878612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 00:48:59.878625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 00:48:59.878635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878656 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 
00:48:59.878678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-25 00:48:59.878709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878820 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:48:59.878913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:48:59.878921 | orchestrator |
2025-05-25 00:48:59.878929 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-05-25 00:48:59.878937 | orchestrator | Sunday 25 May 2025 00:48:07 +0000 (0:00:03.615) 0:01:28.658 ************
2025-05-25 00:48:59.878946 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.878960 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:48:59.878968 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.878976 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.878984 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.878992 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.879000 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.879008 | orchestrator |
2025-05-25 00:48:59.879016 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-05-25 00:48:59.879024 | orchestrator | Sunday 25 May 2025 00:48:08 +0000 (0:00:01.680) 0:01:30.338 ************
2025-05-25 00:48:59.879032 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.879040 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:48:59.879054 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.879062 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.879070 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.879077 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.879085 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.879093 | orchestrator |
2025-05-25 00:48:59.879101 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-25 00:48:59.879109 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:01.523) 0:01:31.861 ************
2025-05-25 00:48:59.879117 | orchestrator |
2025-05-25 00:48:59.879125 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-25 00:48:59.879133 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:00.058) 0:01:31.919 ************
2025-05-25 00:48:59.879141 | orchestrator |
2025-05-25 00:48:59.879149 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-25 00:48:59.879156 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:00.054) 0:01:31.974 ************
2025-05-25 00:48:59.879164 | orchestrator |
2025-05-25 00:48:59.879172 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-25 00:48:59.879214 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:00.052) 0:01:32.026 ************
2025-05-25 00:48:59.879228 | orchestrator |
2025-05-25 00:48:59.879236 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-25 00:48:59.879244 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:00.231) 0:01:32.258 ************
2025-05-25 00:48:59.879252 | orchestrator |
2025-05-25 00:48:59.879290 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-25 00:48:59.879299 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:00.059) 0:01:32.317 ************
2025-05-25 00:48:59.879307 | orchestrator |
2025-05-25 00:48:59.879315 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-25 00:48:59.879323 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:00.066) 0:01:32.383 ************
2025-05-25 00:48:59.879330 | orchestrator |
2025-05-25 00:48:59.879338 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-05-25 00:48:59.879346 | orchestrator | Sunday 25 May 2025 00:48:11 +0000 (0:00:00.070) 0:01:32.454 ************
2025-05-25 00:48:59.879354 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:48:59.879362 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.879370 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.879378 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.879386 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.879394 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.879402 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.879410 | orchestrator |
2025-05-25 00:48:59.879418 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-05-25 00:48:59.879426 | orchestrator | Sunday 25 May 2025 00:48:19 +0000 (0:00:08.070) 0:01:40.525 ************
2025-05-25 00:48:59.879434 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:48:59.879442 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.879449 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.879460 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.879473 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.879482 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.879489 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.879497 | orchestrator |
2025-05-25 00:48:59.879505 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-05-25 00:48:59.879518 | orchestrator | Sunday 25 May 2025 00:48:47 +0000 (0:00:28.368) 0:02:08.893 ************
2025-05-25 00:48:59.879526 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:48:59.879535 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:48:59.879543 | orchestrator | ok: [testbed-manager]
2025-05-25 00:48:59.879550 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:48:59.879558 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:48:59.879572 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:48:59.879580 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:48:59.879588 | orchestrator |
2025-05-25 00:48:59.879596 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-05-25 00:48:59.879603 | orchestrator | Sunday 25 May 2025 00:48:49 +0000 (0:00:02.425) 0:02:11.318 ************
2025-05-25 00:48:59.879611 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:48:59.879619 | orchestrator | changed: [testbed-manager]
2025-05-25 00:48:59.879627 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:48:59.879645 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:48:59.879653 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:48:59.879661 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:48:59.879669 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:48:59.879676 | orchestrator |
2025-05-25 00:48:59.879684 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:48:59.879694 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 00:48:59.879703 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 00:48:59.879735 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 00:48:59.879750 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 00:48:59.879759 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 00:48:59.879767 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 00:48:59.879775 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-25 00:48:59.879783 | orchestrator |
2025-05-25 00:48:59.879791 | orchestrator |
2025-05-25 00:48:59.879799 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:48:59.879807 | orchestrator | Sunday 25 May 2025 00:48:59 +0000 (0:00:09.402) 0:02:20.720 ************
2025-05-25 00:48:59.879815 | orchestrator | ===============================================================================
2025-05-25 00:48:59.879823 | orchestrator | common : Ensure fluentd image is present for label check --------------- 38.16s
2025-05-25 00:48:59.879831 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.37s
2025-05-25 00:48:59.879839 | orchestrator | common : Restart cron container ----------------------------------------- 9.40s
2025-05-25 00:48:59.879846 | orchestrator | common : Restart fluentd container -------------------------------------- 8.07s
2025-05-25 00:48:59.879854 | orchestrator | common : Copying over config.json files for services -------------------- 4.54s
2025-05-25 00:48:59.879862 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.19s
2025-05-25 00:48:59.879870 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.13s
2025-05-25 00:48:59.879878 | orchestrator | common : Check common containers ---------------------------------------- 3.62s
2025-05-25 00:48:59.879886 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.46s
2025-05-25 00:48:59.879893 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.40s
2025-05-25 00:48:59.879901 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.02s
2025-05-25 00:48:59.879909 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.76s
2025-05-25 00:48:59.879917 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.61s
2025-05-25 00:48:59.879930 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.43s
2025-05-25 00:48:59.879939 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.23s
2025-05-25 00:48:59.879946 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.16s
2025-05-25 00:48:59.879954 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.15s
2025-05-25 00:48:59.879962 | orchestrator | common : include_tasks -------------------------------------------------- 1.76s
2025-05-25 00:48:59.879970 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.69s
2025-05-25 00:48:59.879978 | orchestrator | common : Creating log volume -------------------------------------------- 1.68s
2025-05-25 00:48:59.879986 | orchestrator | 2025-05-25 00:48:59 | INFO  | Task 0f26779a-6d96-4951-b510-5e3dcb6291c2 is in state SUCCESS
2025-05-25 00:48:59.879994 | orchestrator | 2025-05-25 00:48:59 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:02.913443 | orchestrator | 2025-05-25 00:49:02 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:02.914145 | orchestrator | 2025-05-25 00:49:02 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:02.914858 | orchestrator | 2025-05-25 00:49:02 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:02.916157 | orchestrator | 2025-05-25 00:49:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:02.919607 | orchestrator | 2025-05-25 00:49:02 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state STARTED
2025-05-25 00:49:02.920352 | orchestrator | 2025-05-25 00:49:02 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:02.920377 | orchestrator | 2025-05-25 00:49:02 | INFO  | Wait 1 second(s) until the next check
2025-05-25
00:49:05.975849 | orchestrator | 2025-05-25 00:49:05 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:05.976309 | orchestrator | 2025-05-25 00:49:05 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:05.976523 | orchestrator | 2025-05-25 00:49:05 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:05.977624 | orchestrator | 2025-05-25 00:49:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:05.977981 | orchestrator | 2025-05-25 00:49:05 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state STARTED
2025-05-25 00:49:05.978617 | orchestrator | 2025-05-25 00:49:05 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:05.978675 | orchestrator | 2025-05-25 00:49:05 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:09.016280 | orchestrator | 2025-05-25 00:49:09 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:09.016826 | orchestrator | 2025-05-25 00:49:09 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:09.018266 | orchestrator | 2025-05-25 00:49:09 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:09.018588 | orchestrator | 2025-05-25 00:49:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:09.022299 | orchestrator | 2025-05-25 00:49:09 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state STARTED
2025-05-25 00:49:09.025580 | orchestrator | 2025-05-25 00:49:09 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:09.025635 | orchestrator | 2025-05-25 00:49:09 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:12.069812 | orchestrator | 2025-05-25 00:49:12 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:12.072799 | orchestrator | 2025-05-25 00:49:12 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:12.072960 | orchestrator | 2025-05-25 00:49:12 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:12.082906 | orchestrator | 2025-05-25 00:49:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:12.083020 | orchestrator | 2025-05-25 00:49:12 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state STARTED
2025-05-25 00:49:12.085686 | orchestrator | 2025-05-25 00:49:12 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:12.085744 | orchestrator | 2025-05-25 00:49:12 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:15.118825 | orchestrator | 2025-05-25 00:49:15 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:15.118917 | orchestrator | 2025-05-25 00:49:15 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:15.119695 | orchestrator | 2025-05-25 00:49:15 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:15.120365 | orchestrator | 2025-05-25 00:49:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:15.120944 | orchestrator | 2025-05-25 00:49:15 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state STARTED
2025-05-25 00:49:15.121740 | orchestrator | 2025-05-25 00:49:15 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:15.121773 | orchestrator | 2025-05-25 00:49:15 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:18.172001 | orchestrator | 2025-05-25 00:49:18 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:18.173983 | orchestrator | 2025-05-25 00:49:18 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:18.174113 | orchestrator | 2025-05-25 00:49:18 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:18.174910 | orchestrator | 2025-05-25 00:49:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:18.177635 | orchestrator | 2025-05-25 00:49:18 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state STARTED
2025-05-25 00:49:18.179465 | orchestrator | 2025-05-25 00:49:18 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:18.179614 | orchestrator | 2025-05-25 00:49:18 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:21.226860 | orchestrator | 2025-05-25 00:49:21 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:21.226979 | orchestrator | 2025-05-25 00:49:21 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:21.226993 | orchestrator | 2025-05-25 00:49:21 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:21.227008 | orchestrator | 2025-05-25 00:49:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:21.227023 | orchestrator | 2025-05-25 00:49:21 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state STARTED
2025-05-25 00:49:21.227409 | orchestrator | 2025-05-25 00:49:21 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:21.227506 | orchestrator | 2025-05-25 00:49:21 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:24.262902 | orchestrator | 2025-05-25 00:49:24 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:24.262980 | orchestrator | 2025-05-25 00:49:24 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:24.263377 | orchestrator | 2025-05-25 00:49:24 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:24.263930 | orchestrator | 2025-05-25 00:49:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:24.264385 | orchestrator | 2025-05-25 00:49:24 | INFO  | Task 6548229d-a90f-4262-9f26-79324b28d918 is in state SUCCESS
2025-05-25 00:49:24.265115 | orchestrator | 2025-05-25 00:49:24 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED
2025-05-25 00:49:24.265622 | orchestrator | 2025-05-25 00:49:24 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:24.265648 | orchestrator | 2025-05-25 00:49:24 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:27.294420 | orchestrator | 2025-05-25 00:49:27 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:27.294702 | orchestrator | 2025-05-25 00:49:27 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:27.295712 | orchestrator | 2025-05-25 00:49:27 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:27.296110 | orchestrator | 2025-05-25 00:49:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:27.296840 | orchestrator | 2025-05-25 00:49:27 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED
2025-05-25 00:49:27.297630 | orchestrator | 2025-05-25 00:49:27 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:27.297667 | orchestrator | 2025-05-25 00:49:27 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:30.318310 | orchestrator | 2025-05-25 00:49:30 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:30.319961 | orchestrator | 2025-05-25 00:49:30 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:30.320438 | orchestrator | 2025-05-25 00:49:30 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:30.320915 | orchestrator | 2025-05-25 00:49:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:30.321471 | orchestrator | 2025-05-25 00:49:30 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED
2025-05-25 00:49:30.324615 | orchestrator | 2025-05-25 00:49:30 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:30.324654 | orchestrator | 2025-05-25 00:49:30 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:33.356628 | orchestrator | 2025-05-25 00:49:33 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:33.356804 | orchestrator | 2025-05-25 00:49:33 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:33.357639 | orchestrator | 2025-05-25 00:49:33 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:33.358635 | orchestrator | 2025-05-25 00:49:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:49:33.359145 | orchestrator | 2025-05-25 00:49:33 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED
2025-05-25 00:49:33.359974 | orchestrator | 2025-05-25 00:49:33 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED
2025-05-25 00:49:33.360018 | orchestrator | 2025-05-25 00:49:33 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:49:36.401565 | orchestrator | 2025-05-25 00:49:36 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:49:36.402767 | orchestrator | 2025-05-25 00:49:36 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:36.404491 | orchestrator | 2025-05-25 00:49:36 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718 is in state STARTED
2025-05-25 00:49:36.405821 | orchestrator | 2025-05-25 00:49:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25
00:49:36.408459 | orchestrator | 2025-05-25 00:49:36 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:36.411662 | orchestrator | 2025-05-25 00:49:36 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:49:36.411699 | orchestrator | 2025-05-25 00:49:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:49:39.444455 | orchestrator | 2025-05-25 00:49:39 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:49:39.447021 | orchestrator | 2025-05-25 00:49:39.447085 | orchestrator | 2025-05-25 00:49:39.447097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 00:49:39.447106 | orchestrator | 2025-05-25 00:49:39.447113 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 00:49:39.447121 | orchestrator | Sunday 25 May 2025 00:49:04 +0000 (0:00:00.699) 0:00:00.699 ************ 2025-05-25 00:49:39.447129 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:49:39.447137 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:49:39.447145 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:49:39.447152 | orchestrator | 2025-05-25 00:49:39.447160 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 00:49:39.447195 | orchestrator | Sunday 25 May 2025 00:49:04 +0000 (0:00:00.689) 0:00:01.388 ************ 2025-05-25 00:49:39.447206 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-25 00:49:39.447214 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-25 00:49:39.447222 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-25 00:49:39.447229 | orchestrator | 2025-05-25 00:49:39.447236 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-25 00:49:39.447244 | orchestrator | 
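The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" records above are a client-side wait loop: a set of task IDs is polled until every task reaches a terminal state. A minimal sketch of that pattern, assuming a hypothetical `get_task_state` lookup (a stand-in for the real state query, e.g. against the task queue backend):

```python
import time

# Terminal states after which a task no longer needs polling.
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each pending task, logging its state, until all are terminal.

    get_task_state is a hypothetical callable mapping a task ID to a
    state string such as "STARTED" or "SUCCESS".
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding during iteration is safe.
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

This reproduces the shape of the log: one state line per still-running task per pass, followed by a wait message while any task remains non-terminal.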
2025-05-25 00:49:39.447251 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-05-25 00:49:39.447259 | orchestrator | Sunday 25 May 2025 00:49:05 +0000 (0:00:00.463) 0:00:01.851 ************
2025-05-25 00:49:39.447266 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:49:39.447274 | orchestrator |
2025-05-25 00:49:39.447282 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-05-25 00:49:39.447289 | orchestrator | Sunday 25 May 2025 00:49:06 +0000 (0:00:01.026) 0:00:02.878 ************
2025-05-25 00:49:39.447296 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-25 00:49:39.447304 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-25 00:49:39.447311 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-25 00:49:39.447318 | orchestrator |
2025-05-25 00:49:39.447326 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-05-25 00:49:39.447333 | orchestrator | Sunday 25 May 2025 00:49:07 +0000 (0:00:00.893) 0:00:03.772 ************
2025-05-25 00:49:39.447341 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-25 00:49:39.447348 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-25 00:49:39.447355 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-25 00:49:39.447362 | orchestrator |
2025-05-25 00:49:39.447370 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-05-25 00:49:39.447396 | orchestrator | Sunday 25 May 2025 00:49:09 +0000 (0:00:02.020) 0:00:05.792 ************
2025-05-25 00:49:39.447404 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:49:39.447412 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:49:39.447419 | orchestrator | changed: [testbed-node-1]
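The "Copying over config.json files" task above writes a per-service config.json that the kolla container entrypoint reads at startup to learn which command to run and which config files to copy into place. An illustrative sketch of producing such a file (not the actual kolla-ansible template; the listen address and port here are placeholder values):

```python
import json

def memcached_config_json(listen_ip="127.0.0.1", port=11211):
    """Build a kolla-style config.json payload for a memcached container.

    The top-level "command" / "config_files" fields follow kolla's
    config.json schema; the concrete command line is illustrative only.
    """
    payload = {
        "command": f"/usr/bin/memcached -vv -l {listen_ip} -p {port}",
        # memcached is configured entirely via its command line here,
        # so no config files need to be copied in at container start.
        "config_files": [],
    }
    return json.dumps(payload, indent=4)

print(memcached_config_json())
```

The same mechanism explains the redis tasks that follow: each service gets its own config.json under /etc/kolla/<service>/, bind-mounted read-only into the container at /var/lib/kolla/config_files/.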
2025-05-25 00:49:39.447426 | orchestrator |
2025-05-25 00:49:39.447433 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-05-25 00:49:39.447440 | orchestrator | Sunday 25 May 2025 00:49:12 +0000 (0:00:02.825) 0:00:08.618 ************
2025-05-25 00:49:39.447447 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:49:39.447454 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:49:39.447461 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:49:39.447469 | orchestrator |
2025-05-25 00:49:39.447476 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:49:39.447496 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:49:39.447505 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:49:39.447524 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:49:39.447531 | orchestrator |
2025-05-25 00:49:39.447547 | orchestrator |
2025-05-25 00:49:39.447554 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:49:39.447599 | orchestrator | Sunday 25 May 2025 00:49:20 +0000 (0:00:08.671) 0:00:17.289 ************
2025-05-25 00:49:39.447608 | orchestrator | ===============================================================================
2025-05-25 00:49:39.447616 | orchestrator | memcached : Restart memcached container --------------------------------- 8.67s
2025-05-25 00:49:39.447632 | orchestrator | memcached : Check memcached container ----------------------------------- 2.83s
2025-05-25 00:49:39.447641 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.02s
2025-05-25 00:49:39.447649 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.03s
2025-05-25 00:49:39.447676 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.89s
2025-05-25 00:49:39.447685 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s
2025-05-25 00:49:39.447694 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2025-05-25 00:49:39.447702 | orchestrator |
2025-05-25 00:49:39.447710 | orchestrator |
2025-05-25 00:49:39.447719 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:49:39.447730 | orchestrator |
2025-05-25 00:49:39.447743 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:49:39.447754 | orchestrator | Sunday 25 May 2025 00:49:05 +0000 (0:00:00.470) 0:00:00.470 ************
2025-05-25 00:49:39.447767 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:49:39.447779 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:49:39.447791 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:49:39.447803 | orchestrator |
2025-05-25 00:49:39.447816 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:49:39.447846 | orchestrator | Sunday 25 May 2025 00:49:05 +0000 (0:00:00.649) 0:00:01.119 ************
2025-05-25 00:49:39.447855 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-05-25 00:49:39.447862 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-05-25 00:49:39.447869 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-05-25 00:49:39.447876 | orchestrator |
2025-05-25 00:49:39.447883 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-05-25 00:49:39.447891 | orchestrator |
2025-05-25 00:49:39.447898 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-05-25 00:49:39.447905 | orchestrator | Sunday 25 May 2025 00:49:06 +0000 (0:00:00.584) 0:00:01.703 ************
2025-05-25 00:49:39.447921 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:49:39.447929 | orchestrator |
2025-05-25 00:49:39.447936 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-05-25 00:49:39.447947 | orchestrator | Sunday 25 May 2025 00:49:07 +0000 (0:00:00.892) 0:00:02.596 ************
2025-05-25 00:49:39.447962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.447980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448047 | orchestrator |
2025-05-25 00:49:39.448054 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-05-25 00:49:39.448062 | orchestrator | Sunday 25 May 2025 00:49:09 +0000 (0:00:01.965) 0:00:04.561 ************
2025-05-25 00:49:39.448069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448137 | orchestrator |
2025-05-25 00:49:39.448145 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-05-25 00:49:39.448152 | orchestrator | Sunday 25 May 2025 00:49:12 +0000 (0:00:03.022) 0:00:07.583 ************
2025-05-25 00:49:39.448159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448279 | orchestrator |
2025-05-25 00:49:39.448287 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-05-25 00:49:39.448294 | orchestrator | Sunday 25 May 2025 00:49:15 +0000 (0:00:02.242) 0:00:10.824 ************
2025-05-25 00:49:39.448302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-25 00:49:39.448362 | orchestrator |
2025-05-25 00:49:39.448370 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-25 00:49:39.448377 | orchestrator | Sunday 25 May 2025 00:49:17 +0000 (0:00:02.242) 0:00:13.066 ************
2025-05-25 00:49:39.448384 | orchestrator |
2025-05-25 00:49:39.448391 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-25 00:49:39.448398 | orchestrator | Sunday 25 May 2025 00:49:17 +0000 (0:00:00.123) 0:00:13.190 ************
2025-05-25 00:49:39.448405 | orchestrator |
2025-05-25 00:49:39.448412 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-25 00:49:39.448419 | orchestrator | Sunday 25 May 2025 00:49:17 +0000 (0:00:00.081) 0:00:13.272 ************
2025-05-25 00:49:39.448427 | orchestrator |
2025-05-25 00:49:39.448434 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-25 00:49:39.448441 | orchestrator | Sunday 25 May 2025 00:49:17 +0000 (0:00:00.101) 0:00:13.373 ************
2025-05-25 00:49:39.448448 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:49:39.448456 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:49:39.448463 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:49:39.448470 | orchestrator |
2025-05-25 00:49:39.448477 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-25 00:49:39.448485 | orchestrator | Sunday 25 May 2025 00:49:27 +0000 (0:00:09.643) 0:00:23.016 ************
2025-05-25 00:49:39.448492 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:49:39.448502 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:49:39.448514 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:49:39.448526 | orchestrator |
2025-05-25 00:49:39.448539 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:49:39.448551 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:49:39.448564 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:49:39.448577 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:49:39.448589 | orchestrator |
2025-05-25 00:49:39.448597 | orchestrator |
2025-05-25 00:49:39.448606 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:49:39.448618 | orchestrator | Sunday 25 May 2025 00:49:36 +0000 (0:00:08.538) 0:00:31.555 ************
2025-05-25 00:49:39.448630 | orchestrator | ===============================================================================
2025-05-25 00:49:39.448641 | orchestrator | redis : Restart redis container ----------------------------------------- 9.64s
2025-05-25 00:49:39.448653 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.54s
2025-05-25 00:49:39.448664 | orchestrator | redis : Copying over redis config files --------------------------------- 3.24s
2025-05-25 00:49:39.448675 | orchestrator | redis : Copying over default config.json files -------------------------- 3.02s
2025-05-25 00:49:39.448687 | orchestrator | redis : Check redis containers ------------------------------------------ 2.24s
2025-05-25 00:49:39.448703 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.97s
2025-05-25 00:49:39.448722 | orchestrator | redis : include_tasks --------------------------------------------------- 0.89s
2025-05-25 00:49:39.448734 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s
2025-05-25 00:49:39.448744 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-05-25 00:49:39.448754 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.31s
2025-05-25 00:49:39.448764 | orchestrator | 2025-05-25 00:49:39 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:49:39.448775 | orchestrator | 2025-05-25 00:49:39 | INFO  | Task ca14e850-670f-475b-b30e-30ae56c9b718
is in state SUCCESS 2025-05-25 00:49:39.448786 | orchestrator | 2025-05-25 00:49:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:49:39.448797 | orchestrator | 2025-05-25 00:49:39 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:39.448911 | orchestrator | 2025-05-25 00:49:39 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:49:39.448927 | orchestrator | 2025-05-25 00:49:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:49:42.493551 | orchestrator | 2025-05-25 00:49:42 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:49:42.496148 | orchestrator | 2025-05-25 00:49:42 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:49:42.496946 | orchestrator | 2025-05-25 00:49:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:49:42.497516 | orchestrator | 2025-05-25 00:49:42 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:42.498998 | orchestrator | 2025-05-25 00:49:42 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:49:42.499016 | orchestrator | 2025-05-25 00:49:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:49:45.528622 | orchestrator | 2025-05-25 00:49:45 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:49:45.528733 | orchestrator | 2025-05-25 00:49:45 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:49:45.529095 | orchestrator | 2025-05-25 00:49:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:49:45.529709 | orchestrator | 2025-05-25 00:49:45 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:45.530388 | orchestrator | 2025-05-25 00:49:45 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in 
state STARTED 2025-05-25 00:49:45.530418 | orchestrator | 2025-05-25 00:49:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:49:48.565050 | orchestrator | 2025-05-25 00:49:48 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:49:48.566397 | orchestrator | 2025-05-25 00:49:48 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:49:48.566746 | orchestrator | 2025-05-25 00:49:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:49:48.567385 | orchestrator | 2025-05-25 00:49:48 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:48.567958 | orchestrator | 2025-05-25 00:49:48 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:49:48.567981 | orchestrator | 2025-05-25 00:49:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:49:51.615275 | orchestrator | 2025-05-25 00:49:51 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:49:51.615812 | orchestrator | 2025-05-25 00:49:51 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:49:51.616740 | orchestrator | 2025-05-25 00:49:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:49:51.617363 | orchestrator | 2025-05-25 00:49:51 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:51.618126 | orchestrator | 2025-05-25 00:49:51 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:49:51.618206 | orchestrator | 2025-05-25 00:49:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:49:54.661117 | orchestrator | 2025-05-25 00:49:54 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:49:54.662632 | orchestrator | 2025-05-25 00:49:54 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 
00:49:54.665467 | orchestrator | 2025-05-25 00:49:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:49:54.666702 | orchestrator | 2025-05-25 00:49:54 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:54.667871 | orchestrator | 2025-05-25 00:49:54 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:49:54.667913 | orchestrator | 2025-05-25 00:49:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:49:57.715126 | orchestrator | 2025-05-25 00:49:57 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:49:57.716145 | orchestrator | 2025-05-25 00:49:57 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:49:57.720100 | orchestrator | 2025-05-25 00:49:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:49:57.720778 | orchestrator | 2025-05-25 00:49:57 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:49:57.721712 | orchestrator | 2025-05-25 00:49:57 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:49:57.721923 | orchestrator | 2025-05-25 00:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:00.764462 | orchestrator | 2025-05-25 00:50:00 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:00.764587 | orchestrator | 2025-05-25 00:50:00 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:00.764900 | orchestrator | 2025-05-25 00:50:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:00.766189 | orchestrator | 2025-05-25 00:50:00 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:00.767400 | orchestrator | 2025-05-25 00:50:00 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 
00:50:00.767430 | orchestrator | 2025-05-25 00:50:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:03.809417 | orchestrator | 2025-05-25 00:50:03 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:03.809536 | orchestrator | 2025-05-25 00:50:03 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:03.812080 | orchestrator | 2025-05-25 00:50:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:03.813846 | orchestrator | 2025-05-25 00:50:03 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:03.814732 | orchestrator | 2025-05-25 00:50:03 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:50:03.814824 | orchestrator | 2025-05-25 00:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:06.850501 | orchestrator | 2025-05-25 00:50:06 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:06.850636 | orchestrator | 2025-05-25 00:50:06 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:06.850961 | orchestrator | 2025-05-25 00:50:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:06.851503 | orchestrator | 2025-05-25 00:50:06 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:06.852191 | orchestrator | 2025-05-25 00:50:06 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:50:06.852231 | orchestrator | 2025-05-25 00:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:09.914333 | orchestrator | 2025-05-25 00:50:09 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:09.914486 | orchestrator | 2025-05-25 00:50:09 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:09.914504 | orchestrator 
| 2025-05-25 00:50:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:09.914592 | orchestrator | 2025-05-25 00:50:09 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:09.915126 | orchestrator | 2025-05-25 00:50:09 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:50:09.915275 | orchestrator | 2025-05-25 00:50:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:12.966954 | orchestrator | 2025-05-25 00:50:12 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:12.967077 | orchestrator | 2025-05-25 00:50:12 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:12.969792 | orchestrator | 2025-05-25 00:50:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:12.972830 | orchestrator | 2025-05-25 00:50:12 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:12.975244 | orchestrator | 2025-05-25 00:50:12 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state STARTED 2025-05-25 00:50:12.975275 | orchestrator | 2025-05-25 00:50:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:16.017961 | orchestrator | 2025-05-25 00:50:16 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:16.019856 | orchestrator | 2025-05-25 00:50:16 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:16.022113 | orchestrator | 2025-05-25 00:50:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:16.023706 | orchestrator | 2025-05-25 00:50:16 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:16.025067 | orchestrator | 2025-05-25 00:50:16 | INFO  | Task 2d7838f5-5c9c-4dbb-bb0c-ce7c582f834e is in state SUCCESS 2025-05-25 00:50:16.026809 | orchestrator | 
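The status loop above polls each OSISM task every few seconds until it leaves the STARTED state (here, task 2d7838f5 finally reports SUCCESS at 00:50:16). A minimal sketch of that polling pattern, with hypothetical helper names (this is not the actual osism CLI implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll get_state(task_id) for every task until none is STARTED.

    get_state is a caller-supplied function (hypothetical stand-in for the
    real task-status API); raises TimeoutError if tasks never finish.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # SUCCESS (or FAILURE) ends polling for this task.
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

In the log the interval plus the status round-trips works out to roughly three seconds between checks, which matches the timestamps above.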
2025-05-25 00:50:16.026845 | orchestrator |
2025-05-25 00:50:16.026858 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:50:16.026869 | orchestrator |
2025-05-25 00:50:16.026881 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:50:16.026893 | orchestrator | Sunday 25 May 2025 00:49:04 +0000 (0:00:00.559) 0:00:00.559 ************
2025-05-25 00:50:16.026904 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:50:16.026916 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:50:16.026927 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:50:16.026963 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:50:16.026979 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:50:16.026991 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:50:16.027002 | orchestrator |
2025-05-25 00:50:16.027012 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:50:16.027024 | orchestrator | Sunday 25 May 2025 00:49:05 +0000 (0:00:00.954) 0:00:01.514 ************
2025-05-25 00:50:16.027035 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 00:50:16.027046 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 00:50:16.027057 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 00:50:16.027068 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 00:50:16.027079 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 00:50:16.027090 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-25 00:50:16.027100 | orchestrator |
2025-05-25 00:50:16.027111 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-05-25 00:50:16.027122 | orchestrator |
2025-05-25 00:50:16.027132 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-05-25 00:50:16.027144 | orchestrator | Sunday 25 May 2025 00:49:06 +0000 (0:00:01.096) 0:00:02.610 ************
2025-05-25 00:50:16.027199 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:50:16.027213 | orchestrator |
2025-05-25 00:50:16.027224 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-25 00:50:16.027235 | orchestrator | Sunday 25 May 2025 00:49:08 +0000 (0:00:02.102) 0:00:04.713 ************
2025-05-25 00:50:16.027246 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-25 00:50:16.027257 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-25 00:50:16.027268 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-25 00:50:16.027279 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-25 00:50:16.027289 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-25 00:50:16.027300 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-25 00:50:16.027311 | orchestrator |
2025-05-25 00:50:16.027322 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-25 00:50:16.027332 | orchestrator | Sunday 25 May 2025 00:49:10 +0000 (0:00:02.001) 0:00:06.715 ************
2025-05-25 00:50:16.027343 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-25 00:50:16.027354 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-25 00:50:16.027365 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-25 00:50:16.027375 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-25 00:50:16.027387 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-25 00:50:16.027398 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-25 00:50:16.027408 | orchestrator |
2025-05-25 00:50:16.027419 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-25 00:50:16.027432 | orchestrator | Sunday 25 May 2025 00:49:12 +0000 (0:00:01.973) 0:00:08.688 ************
2025-05-25 00:50:16.027445 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-05-25 00:50:16.027458 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:50:16.027472 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-05-25 00:50:16.027484 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:50:16.027513 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-05-25 00:50:16.027526 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:50:16.027539 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-05-25 00:50:16.027561 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:50:16.027573 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-05-25 00:50:16.027586 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:50:16.027599 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-05-25 00:50:16.027612 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:50:16.027624 | orchestrator |
2025-05-25 00:50:16.027637 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-05-25 00:50:16.027649 | orchestrator | Sunday 25 May 2025 00:49:14 +0000 (0:00:00.745) 0:00:10.513 ************
2025-05-25 00:50:16.027661 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:50:16.027674 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:50:16.027687 | orchestrator | skipping: [testbed-node-5]
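The "Persist modules via modules-load.d" task above makes the openvswitch kernel module load again on every boot by dropping a one-line file that systemd-modules-load reads at startup. A minimal sketch of that idea (hypothetical helper, writing to a caller-supplied directory rather than the real /etc/modules-load.d):

```python
from pathlib import Path

def persist_module(name, modules_load_dir="/etc/modules-load.d"):
    """Write <modules_load_dir>/<name>.conf listing the module by name.

    systemd-modules-load(8) loads every module named in *.conf files under
    /etc/modules-load.d at boot; one module name per line.
    """
    path = Path(modules_load_dir) / f"{name}.conf"
    path.write_text(f"{name}\n")
    return path
```

The companion "Drop module persistence" task is the inverse (removing that conf file) and is skipped here because persistence is wanted.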
2025-05-25 00:50:16.027699 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:50:16.027713 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:50:16.027725 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:50:16.027738 | orchestrator |
2025-05-25 00:50:16.027751 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-05-25 00:50:16.027764 | orchestrator | Sunday 25 May 2025 00:49:14 +0000 (0:00:00.745) 0:00:11.259 ************
2025-05-25 00:50:16.027795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.027812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.027825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.027836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.027861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.027880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.027892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.027903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.027914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.027925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.027948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.027966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.027978 | orchestrator |
2025-05-25 00:50:16.027989 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-05-25 00:50:16.028000 | orchestrator | Sunday 25 May 2025 00:49:16 +0000 (0:00:01.871) 0:00:13.131 ************
2025-05-25 00:50:16.028012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.028024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.028035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.028058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.028070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.028088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.028100 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.028111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.028129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-25 00:50:16.028145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.028185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.028198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-25 00:50:16.028209 | orchestrator |
2025-05-25 00:50:16.028220 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ******
2025-05-25 00:50:16.028231 | orchestrator | Sunday 25 May 2025 00:49:20 +0000 (0:00:03.790) 0:00:16.921 ************
2025-05-25 00:50:16.028242 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:50:16.028253 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:50:16.028263 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:50:16.028274 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:50:16.028285 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:50:16.028295 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:50:16.028306 | orchestrator |
2025-05-25 00:50:16.028317 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-05-25 00:50:16.028328 | orchestrator | Sunday 25 May 2025 00:49:23 +0000 (0:00:02.297) 0:00:19.548 ************
2025-05-25 00:50:16.028338 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:50:16.028349 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:50:16.028367 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:50:16.028378 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:50:16.028389 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:50:16.028400 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:50:16.028410 | orchestrator |
2025-05-25 00:50:16.028421 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-05-25 00:50:16.028432 | orchestrator | Sunday 25 May 2025 00:49:25 +0000 (0:00:02.297) 0:00:21.846 ************
2025-05-25 00:50:16.028443 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:50:16.028453 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:50:16.028464 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:50:16.028474 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:50:16.028485 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:50:16.028496 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:50:16.028506 | orchestrator |
2025-05-25 00:50:16.028517 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-05-25 00:50:16.028528 | orchestrator | Sunday 25 May 2025 00:49:26 +0000 (0:00:01.192) 0:00:23.039 ************
2025-05-25 00:50:16.028540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028631 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-25 00:50:16.028726 | orchestrator | 2025-05-25 00:50:16.028737 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 00:50:16.028748 | orchestrator | Sunday 25 May 2025 00:49:29 +0000 (0:00:02.626) 0:00:25.666 ************ 2025-05-25 00:50:16.028759 | orchestrator | 2025-05-25 00:50:16.028769 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 00:50:16.028780 | orchestrator | Sunday 25 May 2025 00:49:29 +0000 (0:00:00.202) 0:00:25.868 ************ 2025-05-25 00:50:16.028791 | orchestrator | 2025-05-25 00:50:16.028806 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 00:50:16.028817 | orchestrator | Sunday 25 May 2025 00:49:29 +0000 (0:00:00.225) 0:00:26.094 ************ 2025-05-25 00:50:16.028828 | orchestrator | 2025-05-25 00:50:16.028838 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 00:50:16.028849 | orchestrator | Sunday 25 May 2025 00:49:29 +0000 (0:00:00.121) 0:00:26.215 ************ 2025-05-25 00:50:16.028867 | orchestrator | 2025-05-25 00:50:16.028885 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-25 00:50:16.028903 | orchestrator | Sunday 25 May 2025 00:49:29 +0000 (0:00:00.194) 0:00:26.409 ************ 2025-05-25 00:50:16.028922 | orchestrator | 2025-05-25 00:50:16.028940 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2025-05-25 00:50:16.028959 | orchestrator | Sunday 25 May 2025 00:49:30 +0000 (0:00:00.099) 0:00:26.509 ************ 2025-05-25 00:50:16.028978 | orchestrator | 2025-05-25 00:50:16.028996 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-25 00:50:16.029014 | orchestrator | Sunday 25 May 2025 00:49:30 +0000 (0:00:00.207) 0:00:26.717 ************ 2025-05-25 00:50:16.029025 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:50:16.029036 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:50:16.029046 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:50:16.029057 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:50:16.029068 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:50:16.029078 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:50:16.029097 | orchestrator | 2025-05-25 00:50:16.029108 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-25 00:50:16.029119 | orchestrator | Sunday 25 May 2025 00:49:40 +0000 (0:00:10.009) 0:00:36.727 ************ 2025-05-25 00:50:16.029137 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:50:16.029148 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:50:16.029179 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:50:16.029189 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:50:16.029200 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:50:16.029211 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:50:16.029221 | orchestrator | 2025-05-25 00:50:16.029232 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-25 00:50:16.029243 | orchestrator | Sunday 25 May 2025 00:49:42 +0000 (0:00:01.730) 0:00:38.457 ************ 2025-05-25 00:50:16.029254 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:50:16.029265 | orchestrator | changed: [testbed-node-5] 2025-05-25 
00:50:16.029275 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:50:16.029286 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:50:16.029296 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:50:16.029307 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:50:16.029318 | orchestrator | 2025-05-25 00:50:16.029328 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-25 00:50:16.029339 | orchestrator | Sunday 25 May 2025 00:49:51 +0000 (0:00:09.549) 0:00:48.007 ************ 2025-05-25 00:50:16.029350 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-25 00:50:16.029361 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-25 00:50:16.029372 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-25 00:50:16.029383 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-25 00:50:16.029394 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-25 00:50:16.029404 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-25 00:50:16.029415 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-25 00:50:16.029426 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-25 00:50:16.029436 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-25 00:50:16.029447 | orchestrator | changed: [testbed-node-1] => 
(item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-25 00:50:16.029458 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-25 00:50:16.029468 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-25 00:50:16.029479 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-25 00:50:16.029490 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-25 00:50:16.029500 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-25 00:50:16.029511 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-25 00:50:16.029521 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-25 00:50:16.029532 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-25 00:50:16.029549 | orchestrator | 2025-05-25 00:50:16.029566 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-25 00:50:16.029577 | orchestrator | Sunday 25 May 2025 00:49:59 +0000 (0:00:07.902) 0:00:55.910 ************ 2025-05-25 00:50:16.029588 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-25 00:50:16.029599 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:50:16.029610 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-25 00:50:16.029620 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:50:16.029631 | orchestrator | skipping: [testbed-node-5] => 
(item=br-ex)  2025-05-25 00:50:16.029642 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:50:16.029652 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-25 00:50:16.029663 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-25 00:50:16.029674 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-25 00:50:16.029684 | orchestrator | 2025-05-25 00:50:16.029695 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-25 00:50:16.029705 | orchestrator | Sunday 25 May 2025 00:50:02 +0000 (0:00:03.030) 0:00:58.941 ************ 2025-05-25 00:50:16.029716 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-25 00:50:16.029727 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:50:16.029738 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-25 00:50:16.029748 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:50:16.029759 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-25 00:50:16.029770 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:50:16.029781 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-25 00:50:16.029798 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-25 00:50:16.029809 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-25 00:50:16.029820 | orchestrator | 2025-05-25 00:50:16.029831 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-25 00:50:16.029841 | orchestrator | Sunday 25 May 2025 00:50:06 +0000 (0:00:03.535) 0:01:02.477 ************ 2025-05-25 00:50:16.029852 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:50:16.029862 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:50:16.029873 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:50:16.029884 | orchestrator | changed: 
[testbed-node-1] 2025-05-25 00:50:16.029894 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:50:16.029905 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:50:16.029916 | orchestrator | 2025-05-25 00:50:16.029926 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:50:16.029937 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:50:16.029950 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:50:16.029961 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-25 00:50:16.029972 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-25 00:50:16.029983 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-25 00:50:16.029994 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-25 00:50:16.030004 | orchestrator | 2025-05-25 00:50:16.030066 | orchestrator | 2025-05-25 00:50:16.030080 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 00:50:16.030099 | orchestrator | Sunday 25 May 2025 00:50:13 +0000 (0:00:07.175) 0:01:09.653 ************ 2025-05-25 00:50:16.030109 | orchestrator | =============================================================================== 2025-05-25 00:50:16.030120 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.73s 2025-05-25 00:50:16.030131 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.01s 2025-05-25 00:50:16.030142 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.90s 2025-05-25 00:50:16.030200 | orchestrator | 
openvswitch : Copying over config.json files for services --------------- 3.79s 2025-05-25 00:50:16.030213 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.54s 2025-05-25 00:50:16.030224 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.03s 2025-05-25 00:50:16.030235 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.63s 2025-05-25 00:50:16.030245 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.63s 2025-05-25 00:50:16.030256 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.30s 2025-05-25 00:50:16.030267 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.10s 2025-05-25 00:50:16.030278 | orchestrator | module-load : Load modules ---------------------------------------------- 2.00s 2025-05-25 00:50:16.030289 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.98s 2025-05-25 00:50:16.030299 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.87s 2025-05-25 00:50:16.030310 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.82s 2025-05-25 00:50:16.030326 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.73s 2025-05-25 00:50:16.030337 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.19s 2025-05-25 00:50:16.030348 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s 2025-05-25 00:50:16.030358 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s 2025-05-25 00:50:16.030367 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s 2025-05-25 00:50:16.030377 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.75s 2025-05-25 00:50:16.030386 | orchestrator | 2025-05-25 00:50:16 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:16.030396 | orchestrator | 2025-05-25 00:50:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:19.065013 | orchestrator | 2025-05-25 00:50:19 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:19.067348 | orchestrator | 2025-05-25 00:50:19 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:19.068428 | orchestrator | 2025-05-25 00:50:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:19.070536 | orchestrator | 2025-05-25 00:50:19 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:19.072850 | orchestrator | 2025-05-25 00:50:19 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:19.072887 | orchestrator | 2025-05-25 00:50:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:22.155892 | orchestrator | 2025-05-25 00:50:22 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:22.156002 | orchestrator | 2025-05-25 00:50:22 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:22.156019 | orchestrator | 2025-05-25 00:50:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:22.156104 | orchestrator | 2025-05-25 00:50:22 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:22.156685 | orchestrator | 2025-05-25 00:50:22 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:22.156787 | orchestrator | 2025-05-25 00:50:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:25.196403 | orchestrator | 2025-05-25 00:50:25 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:25.196534 | orchestrator | 2025-05-25 00:50:25 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:25.196549 | orchestrator | 2025-05-25 00:50:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:25.196561 | orchestrator | 2025-05-25 00:50:25 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:25.196572 | orchestrator | 2025-05-25 00:50:25 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:25.196583 | orchestrator | 2025-05-25 00:50:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:28.228702 | orchestrator | 2025-05-25 00:50:28 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:28.229126 | orchestrator | 2025-05-25 00:50:28 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:28.231752 | orchestrator | 2025-05-25 00:50:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:28.233743 | orchestrator | 2025-05-25 00:50:28 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:28.235480 | orchestrator | 2025-05-25 00:50:28 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:28.235522 | orchestrator | 2025-05-25 00:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:31.298784 | orchestrator | 2025-05-25 00:50:31 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:31.299293 | orchestrator | 2025-05-25 00:50:31 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:31.300189 | orchestrator | 2025-05-25 00:50:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:31.300901 | orchestrator | 2025-05-25 00:50:31 | INFO  | Task 
579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:31.301591 | orchestrator | 2025-05-25 00:50:31 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:31.301641 | orchestrator | 2025-05-25 00:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:34.359742 | orchestrator | 2025-05-25 00:50:34 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:34.359859 | orchestrator | 2025-05-25 00:50:34 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:34.359875 | orchestrator | 2025-05-25 00:50:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:34.359886 | orchestrator | 2025-05-25 00:50:34 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:34.362776 | orchestrator | 2025-05-25 00:50:34 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:34.362815 | orchestrator | 2025-05-25 00:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:37.389604 | orchestrator | 2025-05-25 00:50:37 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:37.389753 | orchestrator | 2025-05-25 00:50:37 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:37.389939 | orchestrator | 2025-05-25 00:50:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:37.390476 | orchestrator | 2025-05-25 00:50:37 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:37.391007 | orchestrator | 2025-05-25 00:50:37 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:37.391030 | orchestrator | 2025-05-25 00:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:40.425773 | orchestrator | 2025-05-25 00:50:40 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:40.428445 | orchestrator | 2025-05-25 00:50:40 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:40.430475 | orchestrator | 2025-05-25 00:50:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:40.432546 | orchestrator | 2025-05-25 00:50:40 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:40.432860 | orchestrator | 2025-05-25 00:50:40 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:40.432985 | orchestrator | 2025-05-25 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:43.467847 | orchestrator | 2025-05-25 00:50:43 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:43.470974 | orchestrator | 2025-05-25 00:50:43 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:43.472208 | orchestrator | 2025-05-25 00:50:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:43.478139 | orchestrator | 2025-05-25 00:50:43 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:43.484640 | orchestrator | 2025-05-25 00:50:43 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:43.484706 | orchestrator | 2025-05-25 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:46.527931 | orchestrator | 2025-05-25 00:50:46 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:46.528583 | orchestrator | 2025-05-25 00:50:46 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:46.530367 | orchestrator | 2025-05-25 00:50:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:46.532485 | orchestrator | 2025-05-25 00:50:46 | INFO  | Task 
579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:46.534764 | orchestrator | 2025-05-25 00:50:46 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:46.535353 | orchestrator | 2025-05-25 00:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:49.591639 | orchestrator | 2025-05-25 00:50:49 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:49.592084 | orchestrator | 2025-05-25 00:50:49 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:49.594407 | orchestrator | 2025-05-25 00:50:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:49.595778 | orchestrator | 2025-05-25 00:50:49 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:49.597388 | orchestrator | 2025-05-25 00:50:49 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:49.597430 | orchestrator | 2025-05-25 00:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:52.643539 | orchestrator | 2025-05-25 00:50:52 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:52.646293 | orchestrator | 2025-05-25 00:50:52 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:52.648365 | orchestrator | 2025-05-25 00:50:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:52.651449 | orchestrator | 2025-05-25 00:50:52 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:52.652106 | orchestrator | 2025-05-25 00:50:52 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:52.652241 | orchestrator | 2025-05-25 00:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:55.698373 | orchestrator | 2025-05-25 00:50:55 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:55.698477 | orchestrator | 2025-05-25 00:50:55 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:55.698933 | orchestrator | 2025-05-25 00:50:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:55.700719 | orchestrator | 2025-05-25 00:50:55 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:55.702388 | orchestrator | 2025-05-25 00:50:55 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:55.702430 | orchestrator | 2025-05-25 00:50:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:50:58.743878 | orchestrator | 2025-05-25 00:50:58 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:50:58.744475 | orchestrator | 2025-05-25 00:50:58 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:50:58.745098 | orchestrator | 2025-05-25 00:50:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:50:58.746526 | orchestrator | 2025-05-25 00:50:58 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:50:58.747078 | orchestrator | 2025-05-25 00:50:58 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:50:58.747324 | orchestrator | 2025-05-25 00:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:01.785756 | orchestrator | 2025-05-25 00:51:01 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:01.787265 | orchestrator | 2025-05-25 00:51:01 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:01.788912 | orchestrator | 2025-05-25 00:51:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:01.790100 | orchestrator | 2025-05-25 00:51:01 | INFO  | Task 
579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:01.791574 | orchestrator | 2025-05-25 00:51:01 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:01.791921 | orchestrator | 2025-05-25 00:51:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:04.846749 | orchestrator | 2025-05-25 00:51:04 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:04.847796 | orchestrator | 2025-05-25 00:51:04 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:04.849653 | orchestrator | 2025-05-25 00:51:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:04.850789 | orchestrator | 2025-05-25 00:51:04 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:04.852996 | orchestrator | 2025-05-25 00:51:04 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:04.853069 | orchestrator | 2025-05-25 00:51:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:07.895718 | orchestrator | 2025-05-25 00:51:07 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:07.895839 | orchestrator | 2025-05-25 00:51:07 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:07.896204 | orchestrator | 2025-05-25 00:51:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:07.896951 | orchestrator | 2025-05-25 00:51:07 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:07.897651 | orchestrator | 2025-05-25 00:51:07 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:07.897702 | orchestrator | 2025-05-25 00:51:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:10.946798 | orchestrator | 2025-05-25 00:51:10 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:10.946904 | orchestrator | 2025-05-25 00:51:10 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:10.947434 | orchestrator | 2025-05-25 00:51:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:10.947888 | orchestrator | 2025-05-25 00:51:10 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:10.948615 | orchestrator | 2025-05-25 00:51:10 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:10.948643 | orchestrator | 2025-05-25 00:51:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:13.986516 | orchestrator | 2025-05-25 00:51:13 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:13.986605 | orchestrator | 2025-05-25 00:51:13 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:13.986615 | orchestrator | 2025-05-25 00:51:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:13.986623 | orchestrator | 2025-05-25 00:51:13 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:13.986630 | orchestrator | 2025-05-25 00:51:13 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:13.986637 | orchestrator | 2025-05-25 00:51:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:17.010243 | orchestrator | 2025-05-25 00:51:17 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:17.011801 | orchestrator | 2025-05-25 00:51:17 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:17.015003 | orchestrator | 2025-05-25 00:51:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:17.018461 | orchestrator | 2025-05-25 00:51:17 | INFO  | Task 
579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:17.018822 | orchestrator | 2025-05-25 00:51:17 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:17.018851 | orchestrator | 2025-05-25 00:51:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:20.044695 | orchestrator | 2025-05-25 00:51:20 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:20.045091 | orchestrator | 2025-05-25 00:51:20 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:20.045942 | orchestrator | 2025-05-25 00:51:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:20.046602 | orchestrator | 2025-05-25 00:51:20 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:20.048229 | orchestrator | 2025-05-25 00:51:20 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:20.048261 | orchestrator | 2025-05-25 00:51:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:23.085662 | orchestrator | 2025-05-25 00:51:23 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:23.085771 | orchestrator | 2025-05-25 00:51:23 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:23.085788 | orchestrator | 2025-05-25 00:51:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:23.085895 | orchestrator | 2025-05-25 00:51:23 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:23.086212 | orchestrator | 2025-05-25 00:51:23 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:23.086240 | orchestrator | 2025-05-25 00:51:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:26.121278 | orchestrator | 2025-05-25 00:51:26 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:26.121386 | orchestrator | 2025-05-25 00:51:26 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:26.121474 | orchestrator | 2025-05-25 00:51:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:26.121836 | orchestrator | 2025-05-25 00:51:26 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:26.123370 | orchestrator | 2025-05-25 00:51:26 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:26.123401 | orchestrator | 2025-05-25 00:51:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:29.155876 | orchestrator | 2025-05-25 00:51:29 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:29.158472 | orchestrator | 2025-05-25 00:51:29 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:29.158835 | orchestrator | 2025-05-25 00:51:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:29.159747 | orchestrator | 2025-05-25 00:51:29 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED 2025-05-25 00:51:29.161110 | orchestrator | 2025-05-25 00:51:29 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:29.161158 | orchestrator | 2025-05-25 00:51:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:32.192709 | orchestrator | 2025-05-25 00:51:32 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:32.193006 | orchestrator | 2025-05-25 00:51:32 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:32.193954 | orchestrator | 2025-05-25 00:51:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:32.194758 | orchestrator | 2025-05-25 00:51:32 | INFO  | Task 
579267ca-b809-4669-920e-0beffc897349 is in state STARTED
2025-05-25 00:51:32.195831 | orchestrator | 2025-05-25 00:51:32 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED
2025-05-25 00:51:32.195924 | orchestrator | 2025-05-25 00:51:32 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:51:35.237219 | orchestrator | 2025-05-25 00:51:35 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:51:35.237527 | orchestrator | 2025-05-25 00:51:35 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:51:35.237551 | orchestrator | 2025-05-25 00:51:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:51:35.237954 | orchestrator | 2025-05-25 00:51:35 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state STARTED
2025-05-25 00:51:35.238712 | orchestrator | 2025-05-25 00:51:35 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED
2025-05-25 00:51:35.238791 | orchestrator | 2025-05-25 00:51:35 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:51:38.270092 | orchestrator | 2025-05-25 00:51:38 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:51:38.270367 | orchestrator | 2025-05-25 00:51:38 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:51:38.270392 | orchestrator | 2025-05-25 00:51:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:51:38.270876 | orchestrator | 2025-05-25 00:51:38 | INFO  | Task 579267ca-b809-4669-920e-0beffc897349 is in state SUCCESS
2025-05-25 00:51:38.272896 | orchestrator |
2025-05-25 00:51:38.272942 | orchestrator |
2025-05-25 00:51:38.272955 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-25 00:51:38.272967 | orchestrator |
2025-05-25 00:51:38.272979 | orchestrator | TASK [Inform the user about the following task]
********************************
2025-05-25 00:51:38.272990 | orchestrator | Sunday 25 May 2025 00:49:25 +0000 (0:00:00.126) 0:00:00.126 ************
2025-05-25 00:51:38.273001 | orchestrator | ok: [localhost] => {
2025-05-25 00:51:38.273015 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-25 00:51:38.273026 | orchestrator | }
2025-05-25 00:51:38.273038 | orchestrator |
2025-05-25 00:51:38.273049 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-25 00:51:38.273060 | orchestrator | Sunday 25 May 2025 00:49:25 +0000 (0:00:00.035) 0:00:00.162 ************
2025-05-25 00:51:38.273072 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-25 00:51:38.273085 | orchestrator | ...ignoring
2025-05-25 00:51:38.273097 | orchestrator |
2025-05-25 00:51:38.273108 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-25 00:51:38.273153 | orchestrator | Sunday 25 May 2025 00:49:27 +0000 (0:00:02.537) 0:00:02.699 ************
2025-05-25 00:51:38.273166 | orchestrator | skipping: [localhost]
2025-05-25 00:51:38.273177 | orchestrator |
2025-05-25 00:51:38.273188 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-25 00:51:38.273199 | orchestrator | Sunday 25 May 2025 00:49:27 +0000 (0:00:00.047) 0:00:02.747 ************
2025-05-25 00:51:38.273210 | orchestrator | ok: [localhost]
2025-05-25 00:51:38.273221 | orchestrator |
2025-05-25 00:51:38.273231 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:51:38.273242 | orchestrator |
2025-05-25 00:51:38.273261 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:51:38.273273 | orchestrator | Sunday 25 May 2025 00:49:28 +0000 (0:00:00.129) 0:00:02.876 ************
2025-05-25 00:51:38.273284 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:51:38.273296 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:51:38.273306 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:51:38.273317 | orchestrator |
2025-05-25 00:51:38.273328 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:51:38.273339 | orchestrator | Sunday 25 May 2025 00:49:28 +0000 (0:00:00.554) 0:00:03.430 ************
2025-05-25 00:51:38.273350 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-25 00:51:38.273382 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-25 00:51:38.273393 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-25 00:51:38.273404 | orchestrator |
2025-05-25 00:51:38.273415 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-25 00:51:38.273426 | orchestrator |
2025-05-25 00:51:38.273436 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-25 00:51:38.273447 | orchestrator | Sunday 25 May 2025 00:49:29 +0000 (0:00:00.384) 0:00:03.815 ************
2025-05-25 00:51:38.273458 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:51:38.273469 | orchestrator |
2025-05-25 00:51:38.273482 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-25 00:51:38.273495 | orchestrator | Sunday 25 May 2025 00:49:29 +0000 (0:00:01.061) 0:00:04.478 ************
2025-05-25 00:51:38.273507 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:51:38.273519 | orchestrator |
2025-05-25 00:51:38.273532 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-25 00:51:38.273545 | orchestrator | Sunday 25 May 2025 00:49:30 +0000 (0:00:01.061) 0:00:05.540 ************
2025-05-25 00:51:38.273557 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:51:38.273570 | orchestrator |
2025-05-25 00:51:38.273582 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-25 00:51:38.273594 | orchestrator | Sunday 25 May 2025 00:49:31 +0000 (0:00:00.529) 0:00:06.069 ************
2025-05-25 00:51:38.273607 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:51:38.273619 | orchestrator |
2025-05-25 00:51:38.273632 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-25 00:51:38.273644 | orchestrator | Sunday 25 May 2025 00:49:31 +0000 (0:00:00.595) 0:00:06.665 ************
2025-05-25 00:51:38.273657 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:51:38.273669 | orchestrator |
2025-05-25 00:51:38.273682 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-25 00:51:38.273694 | orchestrator | Sunday 25 May 2025 00:49:32 +0000 (0:00:00.354) 0:00:07.019 ************
2025-05-25 00:51:38.273707 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:51:38.273718 | orchestrator |
2025-05-25 00:51:38.273729 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-25 00:51:38.273740 | orchestrator | Sunday 25 May 2025 00:49:32 +0000 (0:00:00.336) 0:00:07.355 ************
2025-05-25 00:51:38.273751 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:51:38.273761 | orchestrator |
2025-05-25 00:51:38.273772 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-25 00:51:38.273783 | orchestrator | Sunday 25 May 2025 00:49:33 +0000 (0:00:00.715)
0:00:08.071 ************
2025-05-25 00:51:38.273794 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:51:38.273804 | orchestrator |
2025-05-25 00:51:38.273815 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-25 00:51:38.273826 | orchestrator | Sunday 25 May 2025 00:49:34 +0000 (0:00:00.746) 0:00:08.818 ************
2025-05-25 00:51:38.273836 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:51:38.273847 | orchestrator |
2025-05-25 00:51:38.273858 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-25 00:51:38.273868 | orchestrator | Sunday 25 May 2025 00:49:34 +0000 (0:00:00.306) 0:00:09.124 ************
2025-05-25 00:51:38.273879 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:51:38.273890 | orchestrator |
2025-05-25 00:51:38.273911 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-25 00:51:38.273923 | orchestrator | Sunday 25 May 2025 00:49:34 +0000 (0:00:00.343) 0:00:09.468 ************
2025-05-25 00:51:38.273939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.273968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.273982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.273994 | orchestrator |
2025-05-25 00:51:38.274005 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-25 00:51:38.274092 | orchestrator | Sunday 25 May 2025 00:49:35 +0000 (0:00:00.878) 0:00:10.347 ************
2025-05-25 00:51:38.274142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.274170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.274184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.274196 | orchestrator |
2025-05-25 00:51:38.274207 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-05-25 00:51:38.274218 | orchestrator | Sunday 25 May 2025 00:49:37 +0000 (0:00:01.653) 0:00:12.000 ************
2025-05-25 00:51:38.274229 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-25 00:51:38.274240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-25 00:51:38.274251 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-25 00:51:38.274262 | orchestrator |
2025-05-25 00:51:38.274273 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-25 00:51:38.274284 | orchestrator | Sunday 25 May 2025 00:49:39 +0000 (0:00:01.821) 0:00:13.822 ************
2025-05-25 00:51:38.274295 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-25 00:51:38.274306 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-25 00:51:38.274317 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-25 00:51:38.274328 | orchestrator |
2025-05-25 00:51:38.274339 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-25 00:51:38.274350 | orchestrator | Sunday 25 May 2025 00:49:42 +0000 (0:00:03.155) 0:00:16.977 ************
2025-05-25 00:51:38.274368 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-25 00:51:38.274379 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-25
00:51:38.274390 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-25 00:51:38.274401 | orchestrator |
2025-05-25 00:51:38.274418 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-25 00:51:38.274429 | orchestrator | Sunday 25 May 2025 00:49:44 +0000 (0:00:01.840) 0:00:18.818 ************
2025-05-25 00:51:38.274440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-25 00:51:38.274451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-25 00:51:38.274462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-25 00:51:38.274473 | orchestrator |
2025-05-25 00:51:38.274485 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-25 00:51:38.274496 | orchestrator | Sunday 25 May 2025 00:49:45 +0000 (0:00:01.586) 0:00:20.405 ************
2025-05-25 00:51:38.274507 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-25 00:51:38.274519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-25 00:51:38.274530 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-25 00:51:38.274541 | orchestrator |
2025-05-25 00:51:38.274552 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-25 00:51:38.274563 | orchestrator | Sunday 25 May 2025 00:49:46 +0000 (0:00:01.251) 0:00:21.657 ************
2025-05-25 00:51:38.274574 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-25 00:51:38.274585 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-25 00:51:38.274600 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-25 00:51:38.274611 | orchestrator |
2025-05-25 00:51:38.274623 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-25 00:51:38.274635 | orchestrator | Sunday 25 May 2025 00:49:48 +0000 (0:00:01.537) 0:00:23.194 ************
2025-05-25 00:51:38.274646 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:51:38.274657 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:51:38.274668 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:51:38.274678 | orchestrator |
2025-05-25 00:51:38.274689 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-25 00:51:38.274700 | orchestrator | Sunday 25 May 2025 00:49:48 +0000 (0:00:00.440) 0:00:23.635 ************
2025-05-25 00:51:38.274712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.274731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.274754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-25 00:51:38.274767 | orchestrator |
2025-05-25 00:51:38.274778 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-05-25 00:51:38.274789 | orchestrator | Sunday 25 May 2025 00:49:50 +0000 (0:00:01.024) 0:00:25.278 ************
2025-05-25 00:51:38.274800 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:51:38.274811 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:51:38.274822 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:51:38.274832 | orchestrator |
2025-05-25 00:51:38.274843 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-25 00:51:38.274854 | orchestrator | Sunday 25 May 2025 00:49:51 +0000 (0:00:01.024) 0:00:26.303 ************
2025-05-25 00:51:38.274865 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:51:38.274876 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:51:38.274887 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:51:38.274898 | orchestrator |
2025-05-25 00:51:38.274909 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-25 00:51:38.274920 | orchestrator | Sunday 25 May 2025 00:49:58 +0000 (0:00:06.545) 0:00:32.849 ************
2025-05-25 00:51:38.274930 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:51:38.274941 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:51:38.274952 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:51:38.274963 | orchestrator |
2025-05-25 00:51:38.274974 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-25 00:51:38.274985 |
orchestrator | 2025-05-25 00:51:38.274996 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-25 00:51:38.275006 | orchestrator | Sunday 25 May 2025 00:49:58 +0000 (0:00:00.579) 0:00:33.428 ************ 2025-05-25 00:51:38.275017 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:51:38.275028 | orchestrator | 2025-05-25 00:51:38.275039 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-25 00:51:38.275056 | orchestrator | Sunday 25 May 2025 00:49:59 +0000 (0:00:00.900) 0:00:34.328 ************ 2025-05-25 00:51:38.275067 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:51:38.275078 | orchestrator | 2025-05-25 00:51:38.275089 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-25 00:51:38.275100 | orchestrator | Sunday 25 May 2025 00:49:59 +0000 (0:00:00.274) 0:00:34.603 ************ 2025-05-25 00:51:38.275111 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:51:38.275140 | orchestrator | 2025-05-25 00:51:38.275151 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-25 00:51:38.275162 | orchestrator | Sunday 25 May 2025 00:50:01 +0000 (0:00:02.111) 0:00:36.714 ************ 2025-05-25 00:51:38.275173 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:51:38.275184 | orchestrator | 2025-05-25 00:51:38.275195 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-25 00:51:38.275206 | orchestrator | 2025-05-25 00:51:38.275217 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-25 00:51:38.275228 | orchestrator | Sunday 25 May 2025 00:50:57 +0000 (0:00:55.506) 0:01:32.220 ************ 2025-05-25 00:51:38.275239 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:51:38.275249 | orchestrator | 2025-05-25 00:51:38.275260 | orchestrator | 
TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-25 00:51:38.275271 | orchestrator | Sunday 25 May 2025 00:50:58 +0000 (0:00:00.775) 0:01:32.995 ************ 2025-05-25 00:51:38.275282 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:51:38.275292 | orchestrator | 2025-05-25 00:51:38.275303 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-25 00:51:38.275314 | orchestrator | Sunday 25 May 2025 00:50:58 +0000 (0:00:00.367) 0:01:33.363 ************ 2025-05-25 00:51:38.275325 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:51:38.275336 | orchestrator | 2025-05-25 00:51:38.275346 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-25 00:51:38.275358 | orchestrator | Sunday 25 May 2025 00:51:00 +0000 (0:00:01.915) 0:01:35.279 ************ 2025-05-25 00:51:38.275368 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:51:38.275379 | orchestrator | 2025-05-25 00:51:38.275390 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-25 00:51:38.275401 | orchestrator | 2025-05-25 00:51:38.275412 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-25 00:51:38.275423 | orchestrator | Sunday 25 May 2025 00:51:15 +0000 (0:00:15.105) 0:01:50.385 ************ 2025-05-25 00:51:38.275434 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:51:38.275445 | orchestrator | 2025-05-25 00:51:38.276182 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-25 00:51:38.276221 | orchestrator | Sunday 25 May 2025 00:51:16 +0000 (0:00:00.626) 0:01:51.011 ************ 2025-05-25 00:51:38.276235 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:51:38.276248 | orchestrator | 2025-05-25 00:51:38.276260 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2025-05-25 00:51:38.276285 | orchestrator | Sunday 25 May 2025 00:51:16 +0000 (0:00:00.349) 0:01:51.360 ************ 2025-05-25 00:51:38.276298 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:51:38.276311 | orchestrator | 2025-05-25 00:51:38.276331 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-25 00:51:38.276350 | orchestrator | Sunday 25 May 2025 00:51:18 +0000 (0:00:01.765) 0:01:53.126 ************ 2025-05-25 00:51:38.276368 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:51:38.276386 | orchestrator | 2025-05-25 00:51:38.276406 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-25 00:51:38.276424 | orchestrator | 2025-05-25 00:51:38.276444 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-25 00:51:38.276463 | orchestrator | Sunday 25 May 2025 00:51:32 +0000 (0:00:13.888) 0:02:07.014 ************ 2025-05-25 00:51:38.276482 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:51:38.276508 | orchestrator | 2025-05-25 00:51:38.276519 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-05-25 00:51:38.276530 | orchestrator | Sunday 25 May 2025 00:51:32 +0000 (0:00:00.685) 0:02:07.699 ************ 2025-05-25 00:51:38.276541 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-25 00:51:38.276551 | orchestrator | enable_outward_rabbitmq_True 2025-05-25 00:51:38.276562 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-25 00:51:38.276573 | orchestrator | outward_rabbitmq_restart 2025-05-25 00:51:38.276583 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:51:38.276595 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:51:38.276605 | orchestrator | ok: [testbed-node-1] 2025-05-25 
00:51:38.276616 | orchestrator | 2025-05-25 00:51:38.276627 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-25 00:51:38.276637 | orchestrator | skipping: no hosts matched 2025-05-25 00:51:38.276648 | orchestrator | 2025-05-25 00:51:38.276659 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-25 00:51:38.276669 | orchestrator | skipping: no hosts matched 2025-05-25 00:51:38.276680 | orchestrator | 2025-05-25 00:51:38.276691 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-25 00:51:38.276702 | orchestrator | skipping: no hosts matched 2025-05-25 00:51:38.276712 | orchestrator | 2025-05-25 00:51:38.276723 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:51:38.276735 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-25 00:51:38.276747 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-25 00:51:38.276758 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:51:38.276769 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-25 00:51:38.276780 | orchestrator | 2025-05-25 00:51:38.276791 | orchestrator | 2025-05-25 00:51:38.276802 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 00:51:38.276812 | orchestrator | Sunday 25 May 2025 00:51:35 +0000 (0:00:02.776) 0:02:10.476 ************ 2025-05-25 00:51:38.276823 | orchestrator | =============================================================================== 2025-05-25 00:51:38.276834 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.50s 2025-05-25 
00:51:38.276850 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.55s 2025-05-25 00:51:38.276861 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.79s 2025-05-25 00:51:38.276872 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.16s 2025-05-25 00:51:38.276882 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.78s 2025-05-25 00:51:38.276893 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.54s 2025-05-25 00:51:38.276903 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.30s 2025-05-25 00:51:38.276914 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.84s 2025-05-25 00:51:38.276925 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.82s 2025-05-25 00:51:38.276935 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.65s 2025-05-25 00:51:38.276946 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.64s 2025-05-25 00:51:38.276956 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.59s 2025-05-25 00:51:38.276967 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.54s 2025-05-25 00:51:38.276988 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.25s 2025-05-25 00:51:38.276999 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.07s 2025-05-25 00:51:38.277009 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.03s 2025-05-25 00:51:38.277020 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.99s 2025-05-25 00:51:38.277031 
| orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.88s 2025-05-25 00:51:38.277041 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.75s 2025-05-25 00:51:38.277052 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.72s 2025-05-25 00:51:38.277069 | orchestrator | 2025-05-25 00:51:38 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:38.277081 | orchestrator | 2025-05-25 00:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:41.304674 | orchestrator | 2025-05-25 00:51:41 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:41.306967 | orchestrator | 2025-05-25 00:51:41 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:41.309552 | orchestrator | 2025-05-25 00:51:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:41.310631 | orchestrator | 2025-05-25 00:51:41 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:41.310983 | orchestrator | 2025-05-25 00:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:44.346874 | orchestrator | 2025-05-25 00:51:44 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:44.347396 | orchestrator | 2025-05-25 00:51:44 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:44.348576 | orchestrator | 2025-05-25 00:51:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:44.351207 | orchestrator | 2025-05-25 00:51:44 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:44.351236 | orchestrator | 2025-05-25 00:51:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:47.396248 | orchestrator | 2025-05-25 00:51:47 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:47.397039 | orchestrator | 2025-05-25 00:51:47 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:47.398302 | orchestrator | 2025-05-25 00:51:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:47.400103 | orchestrator | 2025-05-25 00:51:47 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:47.400163 | orchestrator | 2025-05-25 00:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:50.450836 | orchestrator | 2025-05-25 00:51:50 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:50.451066 | orchestrator | 2025-05-25 00:51:50 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:50.451947 | orchestrator | 2025-05-25 00:51:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:50.452546 | orchestrator | 2025-05-25 00:51:50 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:50.452574 | orchestrator | 2025-05-25 00:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:53.507456 | orchestrator | 2025-05-25 00:51:53 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:53.508670 | orchestrator | 2025-05-25 00:51:53 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:53.510500 | orchestrator | 2025-05-25 00:51:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:53.512032 | orchestrator | 2025-05-25 00:51:53 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:53.512653 | orchestrator | 2025-05-25 00:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:56.555467 | orchestrator | 2025-05-25 00:51:56 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:56.557496 | orchestrator | 2025-05-25 00:51:56 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:56.559993 | orchestrator | 2025-05-25 00:51:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:56.562476 | orchestrator | 2025-05-25 00:51:56 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:56.562689 | orchestrator | 2025-05-25 00:51:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:51:59.614931 | orchestrator | 2025-05-25 00:51:59 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:51:59.615042 | orchestrator | 2025-05-25 00:51:59 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:51:59.615639 | orchestrator | 2025-05-25 00:51:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:51:59.615666 | orchestrator | 2025-05-25 00:51:59 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:51:59.615680 | orchestrator | 2025-05-25 00:51:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:02.658552 | orchestrator | 2025-05-25 00:52:02 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:02.660584 | orchestrator | 2025-05-25 00:52:02 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:02.662358 | orchestrator | 2025-05-25 00:52:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:02.663975 | orchestrator | 2025-05-25 00:52:02 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:02.664011 | orchestrator | 2025-05-25 00:52:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:05.704472 | orchestrator | 2025-05-25 00:52:05 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:05.706629 | orchestrator | 2025-05-25 00:52:05 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:05.706687 | orchestrator | 2025-05-25 00:52:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:05.706703 | orchestrator | 2025-05-25 00:52:05 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:05.706718 | orchestrator | 2025-05-25 00:52:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:08.763246 | orchestrator | 2025-05-25 00:52:08 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:08.764374 | orchestrator | 2025-05-25 00:52:08 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:08.766685 | orchestrator | 2025-05-25 00:52:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:08.769778 | orchestrator | 2025-05-25 00:52:08 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:08.769821 | orchestrator | 2025-05-25 00:52:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:11.812990 | orchestrator | 2025-05-25 00:52:11 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:11.816284 | orchestrator | 2025-05-25 00:52:11 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:11.819233 | orchestrator | 2025-05-25 00:52:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:11.820818 | orchestrator | 2025-05-25 00:52:11 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:11.820844 | orchestrator | 2025-05-25 00:52:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:14.856480 | orchestrator | 2025-05-25 00:52:14 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:14.858059 | orchestrator | 2025-05-25 00:52:14 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:14.858852 | orchestrator | 2025-05-25 00:52:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:14.860422 | orchestrator | 2025-05-25 00:52:14 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:14.861355 | orchestrator | 2025-05-25 00:52:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:17.914328 | orchestrator | 2025-05-25 00:52:17 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:17.915781 | orchestrator | 2025-05-25 00:52:17 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:17.917433 | orchestrator | 2025-05-25 00:52:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:17.918828 | orchestrator | 2025-05-25 00:52:17 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:17.918857 | orchestrator | 2025-05-25 00:52:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:20.963989 | orchestrator | 2025-05-25 00:52:20 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:20.965801 | orchestrator | 2025-05-25 00:52:20 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:20.967418 | orchestrator | 2025-05-25 00:52:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:20.967993 | orchestrator | 2025-05-25 00:52:20 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:20.968016 | orchestrator | 2025-05-25 00:52:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:24.027695 | orchestrator | 2025-05-25 00:52:24 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:24.028233 | orchestrator | 2025-05-25 00:52:24 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:24.029536 | orchestrator | 2025-05-25 00:52:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:24.031566 | orchestrator | 2025-05-25 00:52:24 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:24.032671 | orchestrator | 2025-05-25 00:52:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:27.085997 | orchestrator | 2025-05-25 00:52:27 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:27.086204 | orchestrator | 2025-05-25 00:52:27 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:27.087236 | orchestrator | 2025-05-25 00:52:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:27.088365 | orchestrator | 2025-05-25 00:52:27 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:27.088392 | orchestrator | 2025-05-25 00:52:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:30.136214 | orchestrator | 2025-05-25 00:52:30 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:30.137728 | orchestrator | 2025-05-25 00:52:30 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:30.140969 | orchestrator | 2025-05-25 00:52:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:30.144004 | orchestrator | 2025-05-25 00:52:30 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:30.144136 | orchestrator | 2025-05-25 00:52:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:33.187509 | orchestrator | 2025-05-25 00:52:33 | INFO  | Task 
fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:33.188901 | orchestrator | 2025-05-25 00:52:33 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:33.189993 | orchestrator | 2025-05-25 00:52:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:33.191086 | orchestrator | 2025-05-25 00:52:33 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:33.191112 | orchestrator | 2025-05-25 00:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:36.226428 | orchestrator | 2025-05-25 00:52:36 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:36.226627 | orchestrator | 2025-05-25 00:52:36 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:36.227987 | orchestrator | 2025-05-25 00:52:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:36.228484 | orchestrator | 2025-05-25 00:52:36 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state STARTED 2025-05-25 00:52:36.228554 | orchestrator | 2025-05-25 00:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:39.265596 | orchestrator | 2025-05-25 00:52:39 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:39.265786 | orchestrator | 2025-05-25 00:52:39 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:39.269366 | orchestrator | 2025-05-25 00:52:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:39.269982 | orchestrator | 2025-05-25 00:52:39.270154 | orchestrator | 2025-05-25 00:52:39 | INFO  | Task 09093579-79fc-40ce-8ff3-a6bf59d894ac is in state SUCCESS 2025-05-25 00:52:39.272694 | orchestrator | 2025-05-25 00:52:39.272736 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-05-25 00:52:39.272748 | orchestrator | 2025-05-25 00:52:39.272760 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 00:52:39.272772 | orchestrator | Sunday 25 May 2025 00:50:16 +0000 (0:00:00.241) 0:00:00.241 ************ 2025-05-25 00:52:39.272783 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:52:39.272795 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:52:39.272806 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:52:39.272816 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.272827 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.272838 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.272848 | orchestrator | 2025-05-25 00:52:39.272859 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 00:52:39.272870 | orchestrator | Sunday 25 May 2025 00:50:17 +0000 (0:00:00.679) 0:00:00.921 ************ 2025-05-25 00:52:39.272900 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-25 00:52:39.272912 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-25 00:52:39.272922 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-25 00:52:39.272933 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-25 00:52:39.272943 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-25 00:52:39.272953 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-25 00:52:39.272964 | orchestrator | 2025-05-25 00:52:39.272975 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-25 00:52:39.272985 | orchestrator | 2025-05-25 00:52:39.272995 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-25 00:52:39.273006 | orchestrator | Sunday 25 May 2025 00:50:18 +0000 (0:00:01.241) 0:00:02.162 ************ 2025-05-25 00:52:39.273018 | 
orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:52:39.273060 | orchestrator | 2025-05-25 00:52:39.273072 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-25 00:52:39.273083 | orchestrator | Sunday 25 May 2025 00:50:20 +0000 (0:00:01.298) 0:00:03.461 ************ 2025-05-25 00:52:39.273096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273213 | orchestrator | 2025-05-25 00:52:39.273224 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-25 00:52:39.273235 | orchestrator | Sunday 25 May 2025 00:50:21 +0000 (0:00:01.704) 0:00:05.165 ************ 2025-05-25 00:52:39.273247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273324 | orchestrator | 2025-05-25 00:52:39.273337 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-25 00:52:39.273349 | orchestrator | Sunday 25 May 2025 00:50:24 +0000 (0:00:02.691) 0:00:07.856 ************ 2025-05-25 00:52:39.273375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273472 | orchestrator | 2025-05-25 00:52:39.273485 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-25 00:52:39.273497 | orchestrator | Sunday 25 May 2025 00:50:25 +0000 (0:00:01.063) 0:00:08.920 ************ 
2025-05-25 00:52:39.273510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273572 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273606 | orchestrator | 2025-05-25 00:52:39.273617 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-25 00:52:39.273628 | orchestrator | Sunday 25 May 2025 00:50:27 +0000 (0:00:01.841) 0:00:10.761 ************ 2025-05-25 00:52:39.273638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.273709 | orchestrator | 2025-05-25 00:52:39.273724 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-25 00:52:39.273735 | orchestrator | Sunday 25 May 2025 00:50:28 +0000 (0:00:01.293) 0:00:12.055 ************ 2025-05-25 00:52:39.273746 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:52:39.273756 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:52:39.273767 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:52:39.273778 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.273788 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.273799 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.273810 | orchestrator | 2025-05-25 00:52:39.273820 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-25 00:52:39.273831 | orchestrator | Sunday 25 May 2025 00:50:31 +0000 (0:00:02.779) 0:00:14.834 ************ 2025-05-25 00:52:39.273842 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-25 00:52:39.273853 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-25 00:52:39.273864 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-25 00:52:39.273879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-25 00:52:39.273890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-25 00:52:39.273900 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 00:52:39.273911 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-25 00:52:39.273922 | orchestrator | 
changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 00:52:39.273932 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 00:52:39.273942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 00:52:39.273954 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 00:52:39.273965 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 00:52:39.273976 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-25 00:52:39.273986 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 00:52:39.273997 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 00:52:39.274008 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 00:52:39.274130 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 00:52:39.274151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 00:52:39.274169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-25 00:52:39.274187 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 00:52:39.274206 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 00:52:39.274225 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 00:52:39.274257 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 00:52:39.274269 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 00:52:39.274280 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-25 00:52:39.274290 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 00:52:39.274301 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 00:52:39.274311 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 00:52:39.274322 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 00:52:39.274332 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 00:52:39.274343 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-25 00:52:39.274353 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 00:52:39.274364 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 00:52:39.274380 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-25 00:52:39.274391 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 
00:52:39.274402 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 00:52:39.274412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-25 00:52:39.274423 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-25 00:52:39.274433 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-25 00:52:39.274444 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-25 00:52:39.274464 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-25 00:52:39.274481 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-25 00:52:39.274498 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-25 00:52:39.274516 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-25 00:52:39.274536 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-25 00:52:39.274554 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-25 00:52:39.274569 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-25 00:52:39.274580 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-25 
00:52:39.274591 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-25 00:52:39.274602 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-25 00:52:39.274612 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-25 00:52:39.274631 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-25 00:52:39.274642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-25 00:52:39.274653 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-25 00:52:39.274664 | orchestrator | 2025-05-25 00:52:39.274674 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 00:52:39.274685 | orchestrator | Sunday 25 May 2025 00:50:51 +0000 (0:00:20.341) 0:00:35.176 ************ 2025-05-25 00:52:39.274696 | orchestrator | 2025-05-25 00:52:39.274706 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 00:52:39.274717 | orchestrator | Sunday 25 May 2025 00:50:51 +0000 (0:00:00.061) 0:00:35.237 ************ 2025-05-25 00:52:39.274728 | orchestrator | 2025-05-25 00:52:39.274738 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 00:52:39.274749 | orchestrator | Sunday 25 May 2025 00:50:51 +0000 (0:00:00.056) 0:00:35.294 ************ 2025-05-25 00:52:39.274759 | orchestrator | 2025-05-25 00:52:39.274769 | orchestrator | TASK [ovn-controller : Flush 
handlers] ***************************************** 2025-05-25 00:52:39.274780 | orchestrator | Sunday 25 May 2025 00:50:52 +0000 (0:00:00.290) 0:00:35.585 ************ 2025-05-25 00:52:39.274790 | orchestrator | 2025-05-25 00:52:39.274801 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 00:52:39.274811 | orchestrator | Sunday 25 May 2025 00:50:52 +0000 (0:00:00.053) 0:00:35.638 ************ 2025-05-25 00:52:39.274822 | orchestrator | 2025-05-25 00:52:39.274832 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-25 00:52:39.274843 | orchestrator | Sunday 25 May 2025 00:50:52 +0000 (0:00:00.053) 0:00:35.692 ************ 2025-05-25 00:52:39.274854 | orchestrator | 2025-05-25 00:52:39.274864 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-25 00:52:39.274874 | orchestrator | Sunday 25 May 2025 00:50:52 +0000 (0:00:00.067) 0:00:35.759 ************ 2025-05-25 00:52:39.274885 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:52:39.274896 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.274906 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.274917 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:52:39.274928 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:52:39.274938 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.274949 | orchestrator | 2025-05-25 00:52:39.274960 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-25 00:52:39.274970 | orchestrator | Sunday 25 May 2025 00:50:54 +0000 (0:00:02.084) 0:00:37.843 ************ 2025-05-25 00:52:39.274986 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.274997 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:52:39.275007 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:52:39.275018 | orchestrator | changed: [testbed-node-3] 2025-05-25 
00:52:39.275083 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.275097 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.275107 | orchestrator | 2025-05-25 00:52:39.275118 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-25 00:52:39.275129 | orchestrator | 2025-05-25 00:52:39.275139 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-25 00:52:39.275150 | orchestrator | Sunday 25 May 2025 00:51:18 +0000 (0:00:23.802) 0:01:01.646 ************ 2025-05-25 00:52:39.275161 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:52:39.275172 | orchestrator | 2025-05-25 00:52:39.275183 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-25 00:52:39.275193 | orchestrator | Sunday 25 May 2025 00:51:18 +0000 (0:00:00.519) 0:01:02.165 ************ 2025-05-25 00:52:39.275215 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:52:39.275226 | orchestrator | 2025-05-25 00:52:39.275246 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-25 00:52:39.275258 | orchestrator | Sunday 25 May 2025 00:51:19 +0000 (0:00:00.658) 0:01:02.823 ************ 2025-05-25 00:52:39.275269 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.275280 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.275291 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.275303 | orchestrator | 2025-05-25 00:52:39.275314 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-25 00:52:39.275325 | orchestrator | Sunday 25 May 2025 00:51:20 +0000 (0:00:00.836) 0:01:03.659 ************ 2025-05-25 00:52:39.275336 | orchestrator | ok: [testbed-node-0] 
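The "Configure OVN in OVSDB" task above writes chassis-level `external_ids` keys into the local Open_vSwitch table on each node. A minimal sketch of the equivalent manual `ovs-vsctl` commands, using the values logged for testbed-node-3 (the commands are only printed here, not executed, since no Open vSwitch daemon is assumed):

```shell
# Build the ovs-vsctl commands corresponding to the external_ids keys
# applied by the "Configure OVN in OVSDB" task; values copied from the
# testbed-node-3 log entries. Commands are printed instead of run.
set -eu

ENCAP_IP="192.168.16.13"
OVN_REMOTE="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"

CMDS=""
for setting in \
    "ovn-encap-ip=${ENCAP_IP}" \
    "ovn-encap-type=geneve" \
    "ovn-remote=${OVN_REMOTE}" \
    "ovn-remote-probe-interval=60000" \
    "ovn-openflow-probe-interval=60" \
    "ovn-monitor-all=false"
do
    CMDS="${CMDS}ovs-vsctl set Open_vSwitch . external_ids:${setting}
"
done
printf '%s' "$CMDS"
```

Note how the log also distinguishes node roles with these keys: the network nodes (testbed-node-0/1/2) get `ovn-bridge-mappings` and `ovn-cms-options=enable-chassis-as-gw,...` set to `present`, while the compute nodes (3/4/5) get them `absent` and use `ovn-chassis-mac-mappings` instead.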
2025-05-25 00:52:39.275347 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.275358 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.275369 | orchestrator | 2025-05-25 00:52:39.275380 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-25 00:52:39.275391 | orchestrator | Sunday 25 May 2025 00:51:20 +0000 (0:00:00.251) 0:01:03.911 ************ 2025-05-25 00:52:39.275403 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.275414 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.275425 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.275436 | orchestrator | 2025-05-25 00:52:39.275447 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-25 00:52:39.275458 | orchestrator | Sunday 25 May 2025 00:51:20 +0000 (0:00:00.345) 0:01:04.256 ************ 2025-05-25 00:52:39.275470 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.275480 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.275489 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.275499 | orchestrator | 2025-05-25 00:52:39.275509 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-25 00:52:39.275519 | orchestrator | Sunday 25 May 2025 00:51:21 +0000 (0:00:00.335) 0:01:04.592 ************ 2025-05-25 00:52:39.275529 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.275539 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.275549 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.275558 | orchestrator | 2025-05-25 00:52:39.275568 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-25 00:52:39.275578 | orchestrator | Sunday 25 May 2025 00:51:21 +0000 (0:00:00.242) 0:01:04.834 ************ 2025-05-25 00:52:39.275588 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.275598 | orchestrator | skipping: 
[testbed-node-1] 2025-05-25 00:52:39.275609 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.275618 | orchestrator | 2025-05-25 00:52:39.275628 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-25 00:52:39.275638 | orchestrator | Sunday 25 May 2025 00:51:21 +0000 (0:00:00.348) 0:01:05.182 ************ 2025-05-25 00:52:39.275648 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.275658 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.275668 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.275678 | orchestrator | 2025-05-25 00:52:39.275688 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-25 00:52:39.275697 | orchestrator | Sunday 25 May 2025 00:51:22 +0000 (0:00:00.333) 0:01:05.516 ************ 2025-05-25 00:52:39.275707 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.275717 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.275727 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.275737 | orchestrator | 2025-05-25 00:52:39.275747 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-25 00:52:39.275757 | orchestrator | Sunday 25 May 2025 00:51:22 +0000 (0:00:00.314) 0:01:05.831 ************ 2025-05-25 00:52:39.275767 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.275776 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.275794 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.275804 | orchestrator | 2025-05-25 00:52:39.275813 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-25 00:52:39.275823 | orchestrator | Sunday 25 May 2025 00:51:22 +0000 (0:00:00.249) 0:01:06.081 ************ 2025-05-25 00:52:39.275833 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.275843 | orchestrator | skipping: 
[testbed-node-1] 2025-05-25 00:52:39.275853 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.275862 | orchestrator | 2025-05-25 00:52:39.275872 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-25 00:52:39.275882 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.305) 0:01:06.386 ************ 2025-05-25 00:52:39.275892 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.275902 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.275912 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.275921 | orchestrator | 2025-05-25 00:52:39.275931 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-25 00:52:39.275942 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.327) 0:01:06.713 ************ 2025-05-25 00:52:39.275951 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.275961 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.275971 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.275981 | orchestrator | 2025-05-25 00:52:39.275996 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-25 00:52:39.276006 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.332) 0:01:07.046 ************ 2025-05-25 00:52:39.276016 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276067 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276079 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276088 | orchestrator | 2025-05-25 00:52:39.276098 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-25 00:52:39.276108 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.248) 0:01:07.295 ************ 2025-05-25 00:52:39.276117 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276127 | orchestrator | skipping: 
[testbed-node-1] 2025-05-25 00:52:39.276136 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276145 | orchestrator | 2025-05-25 00:52:39.276155 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-25 00:52:39.276163 | orchestrator | Sunday 25 May 2025 00:51:24 +0000 (0:00:00.385) 0:01:07.680 ************ 2025-05-25 00:52:39.276171 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276178 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276186 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276194 | orchestrator | 2025-05-25 00:52:39.276205 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-25 00:52:39.276214 | orchestrator | Sunday 25 May 2025 00:51:24 +0000 (0:00:00.522) 0:01:08.203 ************ 2025-05-25 00:52:39.276221 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276229 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276237 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276245 | orchestrator | 2025-05-25 00:52:39.276253 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-25 00:52:39.276260 | orchestrator | Sunday 25 May 2025 00:51:25 +0000 (0:00:00.212) 0:01:08.415 ************ 2025-05-25 00:52:39.276268 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276276 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276284 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276291 | orchestrator | 2025-05-25 00:52:39.276299 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-25 00:52:39.276307 | orchestrator | Sunday 25 May 2025 00:51:25 +0000 (0:00:00.482) 0:01:08.897 ************ 2025-05-25 00:52:39.276315 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-05-25 00:52:39.276328 | orchestrator | 2025-05-25 00:52:39.276336 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-25 00:52:39.276344 | orchestrator | Sunday 25 May 2025 00:51:26 +0000 (0:00:00.658) 0:01:09.556 ************ 2025-05-25 00:52:39.276352 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.276360 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.276367 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.276375 | orchestrator | 2025-05-25 00:52:39.276383 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-25 00:52:39.276391 | orchestrator | Sunday 25 May 2025 00:51:26 +0000 (0:00:00.324) 0:01:09.880 ************ 2025-05-25 00:52:39.276398 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.276406 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.276414 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.276422 | orchestrator | 2025-05-25 00:52:39.276430 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-25 00:52:39.276438 | orchestrator | Sunday 25 May 2025 00:51:27 +0000 (0:00:00.564) 0:01:10.444 ************ 2025-05-25 00:52:39.276445 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276453 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276461 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276469 | orchestrator | 2025-05-25 00:52:39.276477 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-25 00:52:39.276485 | orchestrator | Sunday 25 May 2025 00:51:27 +0000 (0:00:00.399) 0:01:10.843 ************ 2025-05-25 00:52:39.276492 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276500 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276508 | orchestrator | skipping: [testbed-node-2] 
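The cluster-status and leader/follower checks above (skipped here, since this run bootstraps a fresh cluster) work by inspecting ovsdb `cluster/status` output for each host. A minimal sketch of how such output could be classified, assuming the usual `Role: leader` / `Role: follower` line format; the parsing helper is illustrative, not kolla-ansible's actual implementation:

```python
def cluster_role(status_output: str) -> str:
    """Return the role ('leader' or 'follower') reported in
    ovs-appctl cluster/status style output."""
    for line in status_output.splitlines():
        line = line.strip()
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Role:' line found in cluster/status output")


def divide_by_role(host_status: dict) -> dict:
    """Group hosts into leader/follower buckets, mirroring the
    'Divide hosts by their ... leader/follower role' tasks."""
    groups = {"leader": [], "follower": []}
    for host, output in host_status.items():
        groups[cluster_role(output)].append(host)
    return groups
```

For example, `divide_by_role({"testbed-node-0": "Role: leader", "testbed-node-1": "Role: follower"})` puts `testbed-node-0` in the leader bucket and the rest in the follower bucket, which is what later leader-only tasks key off.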
2025-05-25 00:52:39.276516 | orchestrator | 2025-05-25 00:52:39.276524 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-25 00:52:39.276531 | orchestrator | Sunday 25 May 2025 00:51:27 +0000 (0:00:00.380) 0:01:11.224 ************ 2025-05-25 00:52:39.276539 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276547 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276555 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276565 | orchestrator | 2025-05-25 00:52:39.276578 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-25 00:52:39.276591 | orchestrator | Sunday 25 May 2025 00:51:28 +0000 (0:00:00.326) 0:01:11.550 ************ 2025-05-25 00:52:39.276603 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276616 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276629 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276641 | orchestrator | 2025-05-25 00:52:39.276649 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-25 00:52:39.276657 | orchestrator | Sunday 25 May 2025 00:51:28 +0000 (0:00:00.403) 0:01:11.953 ************ 2025-05-25 00:52:39.276665 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276672 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276680 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.276687 | orchestrator | 2025-05-25 00:52:39.276695 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-25 00:52:39.276703 | orchestrator | Sunday 25 May 2025 00:51:28 +0000 (0:00:00.337) 0:01:12.291 ************ 2025-05-25 00:52:39.276711 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.276718 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.276726 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 00:52:39.276734 | orchestrator | 2025-05-25 00:52:39.276741 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-25 00:52:39.276749 | orchestrator | Sunday 25 May 2025 00:51:29 +0000 (0:00:00.330) 0:01:12.622 ************ 2025-05-25 00:52:39.276762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
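The per-item output above comes from iterating a services dict keyed by service name; each of the following tasks (config directories, config.json, container checks) loops over the same dict and acts only on enabled services. A hedged sketch of that pattern, with values copied from the logged items but a filtering helper that is illustrative rather than kolla-ansible's actual code:

```python
# Service definitions as they appear in the loop items above (trimmed
# to the fields this sketch uses).
services = {
    "ovn-northd": {
        "container_name": "ovn_northd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206",
    },
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206",
    },
    "ovn-sb-db": {
        "container_name": "ovn_sb_db",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206",
    },
}


def enabled_containers(services: dict) -> list:
    """Container names of enabled services -- one config directory,
    one config.json copy, and one container check per entry."""
    return [s["container_name"] for s in services.values() if s["enabled"]]
```

With all three services enabled, each looping task emits exactly three items per host, which matches the three `changed:` lines per node in the log.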
2025-05-25 00:52:39.276812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276852 | orchestrator | 2025-05-25 00:52:39.276860 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-25 00:52:39.276867 | orchestrator | Sunday 25 May 2025 00:51:30 +0000 (0:00:01.395) 0:01:14.018 ************ 2025-05-25 00:52:39.276876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.276997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277013 | orchestrator | 2025-05-25 00:52:39.277021 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-25 00:52:39.277050 | orchestrator | Sunday 25 May 2025 00:51:35 +0000 (0:00:04.547) 0:01:18.565 ************ 2025-05-25 00:52:39.277059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 
00:52:39.277136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277151 | orchestrator | 2025-05-25 00:52:39.277159 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 00:52:39.277167 | orchestrator | Sunday 25 May 2025 00:51:37 +0000 (0:00:02.459) 0:01:21.025 ************ 2025-05-25 00:52:39.277179 | orchestrator | 2025-05-25 00:52:39.277187 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 00:52:39.277195 | orchestrator | Sunday 25 May 2025 00:51:37 +0000 (0:00:00.061) 0:01:21.086 ************ 2025-05-25 00:52:39.277203 | orchestrator | 2025-05-25 00:52:39.277210 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 00:52:39.277218 | orchestrator | Sunday 25 May 2025 00:51:37 +0000 (0:00:00.058) 0:01:21.145 ************ 2025-05-25 00:52:39.277226 | orchestrator | 2025-05-25 00:52:39.277234 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-25 00:52:39.277241 | orchestrator | Sunday 25 May 2025 
00:51:37 +0000 (0:00:00.055) 0:01:21.201 ************ 2025-05-25 00:52:39.277249 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.277257 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.277265 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.277272 | orchestrator | 2025-05-25 00:52:39.277280 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-25 00:52:39.277288 | orchestrator | Sunday 25 May 2025 00:51:45 +0000 (0:00:07.776) 0:01:28.977 ************ 2025-05-25 00:52:39.277295 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.277303 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.277315 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.277322 | orchestrator | 2025-05-25 00:52:39.277330 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-25 00:52:39.277338 | orchestrator | Sunday 25 May 2025 00:51:48 +0000 (0:00:03.199) 0:01:32.176 ************ 2025-05-25 00:52:39.277346 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.277353 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.277361 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.277369 | orchestrator | 2025-05-25 00:52:39.277376 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-25 00:52:39.277384 | orchestrator | Sunday 25 May 2025 00:51:56 +0000 (0:00:07.832) 0:01:40.009 ************ 2025-05-25 00:52:39.277391 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.277399 | orchestrator | 2025-05-25 00:52:39.277406 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-25 00:52:39.277414 | orchestrator | Sunday 25 May 2025 00:51:56 +0000 (0:00:00.130) 0:01:40.139 ************ 2025-05-25 00:52:39.277422 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.277430 | 
orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.277437 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.277445 | orchestrator | 2025-05-25 00:52:39.277456 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-25 00:52:39.277464 | orchestrator | Sunday 25 May 2025 00:51:57 +0000 (0:00:00.866) 0:01:41.006 ************ 2025-05-25 00:52:39.277472 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.277480 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.277487 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.277495 | orchestrator | 2025-05-25 00:52:39.277503 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-25 00:52:39.277511 | orchestrator | Sunday 25 May 2025 00:51:58 +0000 (0:00:00.717) 0:01:41.723 ************ 2025-05-25 00:52:39.277518 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.277526 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.277534 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.277542 | orchestrator | 2025-05-25 00:52:39.277549 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-25 00:52:39.277557 | orchestrator | Sunday 25 May 2025 00:51:59 +0000 (0:00:00.972) 0:01:42.695 ************ 2025-05-25 00:52:39.277564 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.277572 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.277580 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.277587 | orchestrator | 2025-05-25 00:52:39.277595 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-25 00:52:39.277607 | orchestrator | Sunday 25 May 2025 00:52:00 +0000 (0:00:00.653) 0:01:43.349 ************ 2025-05-25 00:52:39.277614 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.277622 | orchestrator | ok: [testbed-node-0] 
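In the two "Configure OVN ... connection settings" tasks above, only the cluster leader (testbed-node-0) reports `changed` while the followers are skipped, so the shared connection row in the clustered database is written exactly once. A minimal sketch of that run-once-on-the-leader selection, assuming the leader/follower facts gathered by the preceding "Get ... cluster leader" tasks (illustrative only):

```python
def connection_settings_host(roles: dict) -> str:
    """Pick the single host that should apply the NB/SB connection
    settings: the current cluster leader."""
    leaders = [host for host, role in roles.items() if role == "leader"]
    if len(leaders) != 1:
        raise RuntimeError("expected exactly one leader, got %r" % leaders)
    return leaders[0]
```

For the run logged here, `connection_settings_host({"testbed-node-0": "leader", "testbed-node-1": "follower", "testbed-node-2": "follower"})` returns `"testbed-node-0"`, matching the one `changed:` line per task.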
2025-05-25 00:52:39.277630 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.277638 | orchestrator | 2025-05-25 00:52:39.277646 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-25 00:52:39.277653 | orchestrator | Sunday 25 May 2025 00:52:01 +0000 (0:00:01.154) 0:01:44.504 ************ 2025-05-25 00:52:39.277661 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.277668 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.277676 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.277684 | orchestrator | 2025-05-25 00:52:39.277692 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-25 00:52:39.277699 | orchestrator | Sunday 25 May 2025 00:52:01 +0000 (0:00:00.682) 0:01:45.186 ************ 2025-05-25 00:52:39.277707 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.277715 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.277722 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.277730 | orchestrator | 2025-05-25 00:52:39.277738 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-25 00:52:39.277745 | orchestrator | Sunday 25 May 2025 00:52:02 +0000 (0:00:00.451) 0:01:45.637 ************ 2025-05-25 00:52:39.277754 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277762 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277770 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277778 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277790 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277810 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277824 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277832 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277840 | orchestrator | 2025-05-25 00:52:39.277848 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-25 00:52:39.277856 | orchestrator | Sunday 25 May 2025 00:52:03 +0000 (0:00:01.493) 0:01:47.131 ************ 2025-05-25 00:52:39.277864 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277872 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277880 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277888 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277920 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277949 | orchestrator | 2025-05-25 00:52:39.277957 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-25 00:52:39.277965 | orchestrator | Sunday 25 May 2025 00:52:07 +0000 (0:00:03.698) 0:01:50.829 ************ 2025-05-25 00:52:39.277973 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277981 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277989 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.277997 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.278005 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.278073 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.278150 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.278165 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.278173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 00:52:39.278181 | orchestrator | 2025-05-25 00:52:39.278189 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 00:52:39.278197 | orchestrator | Sunday 
25 May 2025 00:52:10 +0000 (0:00:03.137) 0:01:53.966 ************ 2025-05-25 00:52:39.278205 | orchestrator | 2025-05-25 00:52:39.278212 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 00:52:39.278220 | orchestrator | Sunday 25 May 2025 00:52:10 +0000 (0:00:00.103) 0:01:54.070 ************ 2025-05-25 00:52:39.278228 | orchestrator | 2025-05-25 00:52:39.278235 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-25 00:52:39.278243 | orchestrator | Sunday 25 May 2025 00:52:10 +0000 (0:00:00.244) 0:01:54.315 ************ 2025-05-25 00:52:39.278251 | orchestrator | 2025-05-25 00:52:39.278258 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-25 00:52:39.278266 | orchestrator | Sunday 25 May 2025 00:52:11 +0000 (0:00:00.068) 0:01:54.384 ************ 2025-05-25 00:52:39.278274 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.278281 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.278289 | orchestrator | 2025-05-25 00:52:39.278297 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-25 00:52:39.278304 | orchestrator | Sunday 25 May 2025 00:52:17 +0000 (0:00:06.265) 0:02:00.649 ************ 2025-05-25 00:52:39.278312 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.278320 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.278328 | orchestrator | 2025-05-25 00:52:39.278335 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-25 00:52:39.278343 | orchestrator | Sunday 25 May 2025 00:52:23 +0000 (0:00:06.329) 0:02:06.979 ************ 2025-05-25 00:52:39.278350 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:52:39.278358 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:52:39.278366 | orchestrator | 2025-05-25 00:52:39.278374 | 
orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-25 00:52:39.278381 | orchestrator | Sunday 25 May 2025 00:52:30 +0000 (0:00:06.465) 0:02:13.444 ************ 2025-05-25 00:52:39.278389 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:52:39.278397 | orchestrator | 2025-05-25 00:52:39.278404 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-25 00:52:39.278412 | orchestrator | Sunday 25 May 2025 00:52:30 +0000 (0:00:00.299) 0:02:13.743 ************ 2025-05-25 00:52:39.278419 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.278427 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.278435 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.278443 | orchestrator | 2025-05-25 00:52:39.278450 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-25 00:52:39.278462 | orchestrator | Sunday 25 May 2025 00:52:31 +0000 (0:00:00.786) 0:02:14.530 ************ 2025-05-25 00:52:39.278470 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.278478 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.278486 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.278493 | orchestrator | 2025-05-25 00:52:39.278501 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-25 00:52:39.278509 | orchestrator | Sunday 25 May 2025 00:52:31 +0000 (0:00:00.597) 0:02:15.128 ************ 2025-05-25 00:52:39.278516 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.278524 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.278532 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.278539 | orchestrator | 2025-05-25 00:52:39.278547 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-25 00:52:39.278555 | orchestrator | Sunday 25 May 2025 00:52:32 +0000 
(0:00:00.942) 0:02:16.070 ************ 2025-05-25 00:52:39.278562 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:52:39.278570 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:52:39.278578 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:52:39.278585 | orchestrator | 2025-05-25 00:52:39.278593 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-25 00:52:39.278604 | orchestrator | Sunday 25 May 2025 00:52:33 +0000 (0:00:00.754) 0:02:16.824 ************ 2025-05-25 00:52:39.278613 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.278621 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.278628 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.278636 | orchestrator | 2025-05-25 00:52:39.278644 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-25 00:52:39.278656 | orchestrator | Sunday 25 May 2025 00:52:34 +0000 (0:00:00.704) 0:02:17.529 ************ 2025-05-25 00:52:39.278668 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:52:39.278681 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:52:39.278693 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:52:39.278706 | orchestrator | 2025-05-25 00:52:39.278719 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:52:39.278729 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-25 00:52:39.278738 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-25 00:52:39.278750 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-25 00:52:39.278759 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 00:52:39.278767 | orchestrator | testbed-node-4 : ok=12  changed=8  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 00:52:39.278774 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 00:52:39.278782 | orchestrator | 2025-05-25 00:52:39.278790 | orchestrator | 2025-05-25 00:52:39.278798 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 00:52:39.278806 | orchestrator | Sunday 25 May 2025 00:52:36 +0000 (0:00:01.816) 0:02:19.345 ************ 2025-05-25 00:52:39.278813 | orchestrator | =============================================================================== 2025-05-25 00:52:39.278821 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.80s 2025-05-25 00:52:39.278829 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.34s 2025-05-25 00:52:39.278836 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.30s 2025-05-25 00:52:39.278852 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.04s 2025-05-25 00:52:39.278860 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.53s 2025-05-25 00:52:39.278867 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.55s 2025-05-25 00:52:39.278875 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.70s 2025-05-25 00:52:39.278883 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.15s 2025-05-25 00:52:39.278890 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.78s 2025-05-25 00:52:39.278898 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.69s 2025-05-25 00:52:39.278906 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 
2.46s 2025-05-25 00:52:39.278913 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.08s 2025-05-25 00:52:39.278921 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.84s 2025-05-25 00:52:39.278928 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.82s 2025-05-25 00:52:39.278936 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.70s 2025-05-25 00:52:39.278944 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s 2025-05-25 00:52:39.278951 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2025-05-25 00:52:39.278959 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.30s 2025-05-25 00:52:39.278967 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.29s 2025-05-25 00:52:39.278974 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.24s 2025-05-25 00:52:39.278982 | orchestrator | 2025-05-25 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:42.311875 | orchestrator | 2025-05-25 00:52:42 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:42.314425 | orchestrator | 2025-05-25 00:52:42 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:42.316063 | orchestrator | 2025-05-25 00:52:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:42.316088 | orchestrator | 2025-05-25 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:52:45.366256 | orchestrator | 2025-05-25 00:52:45 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:52:45.366370 | orchestrator | 2025-05-25 00:52:45 | INFO  | Task 
ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:52:45.366405 | orchestrator | 2025-05-25 00:52:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:52:45.366418 | orchestrator | 2025-05-25 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:17.979054 | orchestrator | 2025-05-25 00:55:17 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state
STARTED 2025-05-25 00:55:17.980075 | orchestrator | 2025-05-25 00:55:17 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:17.982928 | orchestrator | 2025-05-25 00:55:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:17.983321 | orchestrator | 2025-05-25 00:55:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:21.040536 | orchestrator | 2025-05-25 00:55:21 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:21.040660 | orchestrator | 2025-05-25 00:55:21 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:21.044288 | orchestrator | 2025-05-25 00:55:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:21.044374 | orchestrator | 2025-05-25 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:24.110344 | orchestrator | 2025-05-25 00:55:24 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:24.112166 | orchestrator | 2025-05-25 00:55:24 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:24.116361 | orchestrator | 2025-05-25 00:55:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:24.116398 | orchestrator | 2025-05-25 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:27.162697 | orchestrator | 2025-05-25 00:55:27 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:27.163352 | orchestrator | 2025-05-25 00:55:27 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:27.166219 | orchestrator | 2025-05-25 00:55:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:27.166257 | orchestrator | 2025-05-25 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:30.228663 | orchestrator | 
2025-05-25 00:55:30 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:30.230402 | orchestrator | 2025-05-25 00:55:30 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:30.232130 | orchestrator | 2025-05-25 00:55:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:30.232162 | orchestrator | 2025-05-25 00:55:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:33.279568 | orchestrator | 2025-05-25 00:55:33 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:33.280575 | orchestrator | 2025-05-25 00:55:33 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:33.282330 | orchestrator | 2025-05-25 00:55:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:33.282410 | orchestrator | 2025-05-25 00:55:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:36.331017 | orchestrator | 2025-05-25 00:55:36 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:36.331150 | orchestrator | 2025-05-25 00:55:36 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:36.333660 | orchestrator | 2025-05-25 00:55:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:36.333779 | orchestrator | 2025-05-25 00:55:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:39.388634 | orchestrator | 2025-05-25 00:55:39 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:39.391311 | orchestrator | 2025-05-25 00:55:39 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:39.394205 | orchestrator | 2025-05-25 00:55:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:39.394273 | orchestrator | 2025-05-25 00:55:39 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 00:55:42.449281 | orchestrator | 2025-05-25 00:55:42 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:42.452595 | orchestrator | 2025-05-25 00:55:42 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:42.454004 | orchestrator | 2025-05-25 00:55:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:42.454090 | orchestrator | 2025-05-25 00:55:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:45.503263 | orchestrator | 2025-05-25 00:55:45 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:45.503583 | orchestrator | 2025-05-25 00:55:45 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:45.505768 | orchestrator | 2025-05-25 00:55:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:45.505842 | orchestrator | 2025-05-25 00:55:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:48.548655 | orchestrator | 2025-05-25 00:55:48 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:48.548805 | orchestrator | 2025-05-25 00:55:48 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:48.548822 | orchestrator | 2025-05-25 00:55:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:55:48.548835 | orchestrator | 2025-05-25 00:55:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:55:51.592074 | orchestrator | 2025-05-25 00:55:51 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED 2025-05-25 00:55:51.593254 | orchestrator | 2025-05-25 00:55:51 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:55:51.596509 | orchestrator | 2025-05-25 00:55:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state 
STARTED
2025-05-25 00:55:51.597194 | orchestrator | 2025-05-25 00:55:51 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:55:54.642411 | orchestrator | 2025-05-25 00:55:54 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:55:54.645241 | orchestrator | 2025-05-25 00:55:54 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:55:54.649315 | orchestrator | 2025-05-25 00:55:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:55:54.649345 | orchestrator | 2025-05-25 00:55:54 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:55:57.697218 | orchestrator | 2025-05-25 00:55:57 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state STARTED
2025-05-25 00:55:57.699009 | orchestrator | 2025-05-25 00:55:57 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:55:57.701636 | orchestrator | 2025-05-25 00:55:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:55:57.703586 | orchestrator | 2025-05-25 00:55:57 | INFO  | Task 7c35bbda-7106-4936-8a63-3e06a9b50752 is in state STARTED
2025-05-25 00:55:57.704574 | orchestrator | 2025-05-25 00:55:57 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:00.766523 | orchestrator | 2025-05-25 00:56:00 | INFO  | Task fe7a6be2-2ea8-4b88-8037-9aa1f6b63ee7 is in state SUCCESS
2025-05-25 00:56:00.769038 | orchestrator |
2025-05-25 00:56:00.769098 | orchestrator |
2025-05-25 00:56:00.769110 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:56:00.769122 | orchestrator |
2025-05-25 00:56:00.769133 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:56:00.769145 | orchestrator | Sunday 25 May 2025 00:49:03 +0000 (0:00:00.446) 0:00:00.446 ************
2025-05-25 00:56:00.769156 | orchestrator | ok: [testbed-node-0]
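The repeated status lines above come from a client-side polling loop: each cycle looks up the state of the outstanding tasks, prints one INFO line per task, and sleeps before the next check until every task has left the STARTED state. A minimal sketch of that pattern; the `wait_for_tasks` name and the `get_state` callback are hypothetical stand-ins, not the actual osism client code, which queries its task backend:

```python
import itertools
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, max_checks=None):
    """Poll task states until none is PENDING/STARTED, sleeping between cycles.

    get_state(task_id) -> str is a caller-supplied lookup (hypothetical here;
    the real client asks its result backend). Returns the final states.
    """
    cycles = itertools.count() if max_checks is None else range(max_checks)
    states = {}
    for _ in cycles:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s not in ("PENDING", "STARTED") for s in states.values()):
            break
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return states
```

With `interval=1.0` this reproduces the roughly one-second cadence (plus query latency) visible in the timestamps above.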
2025-05-25 00:56:00.769168 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.769179 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.769190 | orchestrator |
2025-05-25 00:56:00.769201 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:56:00.769213 | orchestrator | Sunday 25 May 2025 00:49:04 +0000 (0:00:00.636) 0:00:01.083 ************
2025-05-25 00:56:00.769224 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-25 00:56:00.769235 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-25 00:56:00.769246 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-25 00:56:00.769257 | orchestrator |
2025-05-25 00:56:00.769268 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-25 00:56:00.769278 | orchestrator |
2025-05-25 00:56:00.769289 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-25 00:56:00.769300 | orchestrator | Sunday 25 May 2025 00:49:05 +0000 (0:00:00.648) 0:00:01.732 ************
2025-05-25 00:56:00.769351 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:56:00.769364 | orchestrator |
2025-05-25 00:56:00.769376 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-25 00:56:00.769388 | orchestrator | Sunday 25 May 2025 00:49:06 +0000 (0:00:00.793) 0:00:02.525 ************
2025-05-25 00:56:00.769399 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:56:00.769411 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.769422 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.769441 | orchestrator |
2025-05-25 00:56:00.769453 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-25 00:56:00.769464 | orchestrator | Sunday 25 May 2025 00:49:07 +0000 (0:00:01.325) 0:00:03.851 ************
2025-05-25 00:56:00.769476 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:56:00.769487 | orchestrator |
2025-05-25 00:56:00.769499 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-25 00:56:00.769511 | orchestrator | Sunday 25 May 2025 00:49:08 +0000 (0:00:00.988) 0:00:04.839 ************
2025-05-25 00:56:00.769522 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:56:00.769534 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.769545 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.769557 | orchestrator |
2025-05-25 00:56:00.769568 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-25 00:56:00.769580 | orchestrator | Sunday 25 May 2025 00:49:09 +0000 (0:00:01.401) 0:00:06.241 ************
2025-05-25 00:56:00.769591 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-25 00:56:00.769603 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-25 00:56:00.769617 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-25 00:56:00.769630 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-25 00:56:00.769644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-25 00:56:00.769658 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-25 00:56:00.769673 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-25 00:56:00.769686 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-25 00:56:00.769730 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-25 00:56:00.769743 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-25 00:56:00.769755 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-25 00:56:00.769767 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-25 00:56:00.769780 | orchestrator |
2025-05-25 00:56:00.769792 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-25 00:56:00.769805 | orchestrator | Sunday 25 May 2025 00:49:13 +0000 (0:00:03.776) 0:00:10.017 ************
2025-05-25 00:56:00.769818 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-25 00:56:00.769831 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-25 00:56:00.769844 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-25 00:56:00.769865 | orchestrator |
2025-05-25 00:56:00.769878 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-25 00:56:00.769897 | orchestrator | Sunday 25 May 2025 00:49:14 +0000 (0:00:00.944) 0:00:10.962 ************
2025-05-25 00:56:00.769909 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-25 00:56:00.769922 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-25 00:56:00.769940 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-25 00:56:00.769961 | orchestrator |
2025-05-25 00:56:00.769974 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-25 00:56:00.769985 | orchestrator | Sunday 25 May 2025 00:49:15 +0000 (0:00:01.482) 0:00:12.444 ************
2025-05-25 00:56:00.770003 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
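The sysctl and module persistence tasks above converge on small drop-in files. A sketch of roughly what they amount to, written to a temporary directory here; the real targets would be /etc/sysctl.d and /etc/modules-load.d, the file names are hypothetical, and KOLLA_UNSET is treated as "leave the kernel default alone", matching the `ok:` (unchanged) results in the log:

```python
import tempfile
from pathlib import Path

# Values as they appear in the task output above; KOLLA_UNSET is a
# sentinel meaning "do not set this key".
SYSCTL = {
    "net.ipv6.ip_nonlocal_bind": 1,
    "net.ipv4.ip_nonlocal_bind": 1,
    "net.ipv4.tcp_retries2": "KOLLA_UNSET",
    "net.unix.max_dgram_qlen": 128,
}
MODULES = ["ip_vs"]


def render(dest: Path) -> None:
    # One module name per line, as modules-load.d(5) expects.
    (dest / "ip_vs.conf").write_text("".join(f"{m}\n" for m in MODULES))
    # "key = value" lines, skipping unset sentinels, as sysctl.d(5) expects.
    (dest / "loadbalancer.conf").write_text(
        "".join(f"{k} = {v}\n" for k, v in SYSCTL.items()
                if v != "KOLLA_UNSET"))


with tempfile.TemporaryDirectory() as tmp:
    render(Path(tmp))
    print((Path(tmp) / "loadbalancer.conf").read_text())
```

The nonlocal_bind keys let keepalived and haproxy bind the virtual IP before it is assigned to the node; ip_vs is loaded because keepalived uses IPVS for its virtual-server machinery.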
2025-05-25 00:56:00.770014 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.770099 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-05-25 00:56:00.770111 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.770128 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-05-25 00:56:00.770140 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.770150 | orchestrator |
2025-05-25 00:56:00.770161 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-05-25 00:56:00.770172 | orchestrator | Sunday 25 May 2025 00:49:16 +0000 (0:00:00.588) 0:00:13.033 ************
2025-05-25 00:56:00.770229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 00:56:00.770249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-25 00:56:00.770262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 00:56:00.770274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 00:56:00.770286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 00:56:00.770320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 00:56:00.770333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 00:56:00.770345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-25 00:56:00.770357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 00:56:00.770369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 00:56:00.770380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-25 00:56:00.770403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-25 00:56:00.770414 | orchestrator |
2025-05-25 00:56:00.770426 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-05-25 00:56:00.770437 | orchestrator | Sunday 25 May 2025 00:49:19 +0000 (0:00:02.601) 0:00:15.635 ************
2025-05-25 00:56:00.770448 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.770459 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.770470 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.770480 | orchestrator |
2025-05-25 00:56:00.770504 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-05-25 00:56:00.770515 | orchestrator | Sunday 25 May 2025 00:49:21 +0000 (0:00:02.299) 0:00:17.934 ************
2025-05-25 00:56:00.770526 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-05-25 00:56:00.770537 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-05-25 00:56:00.770548 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-05-25 00:56:00.770559 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-05-25 00:56:00.770569 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-05-25 00:56:00.770580 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-05-25 00:56:00.770591 | orchestrator |
2025-05-25 00:56:00.770601 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-05-25 00:56:00.770612 | orchestrator | Sunday 25 May 2025 00:49:24 +0000 (0:00:02.963) 0:00:20.898 ************
2025-05-25 00:56:00.770623 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.770634 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.770644 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.770655 | orchestrator |
2025-05-25 00:56:00.770666 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-05-25 00:56:00.770676 | orchestrator | Sunday 25 May 2025 00:49:26 +0000 (0:00:01.599) 0:00:22.497 ************
2025-05-25 00:56:00.770687 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.770719 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:56:00.770730 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.770741 | orchestrator |
2025-05-25 00:56:00.770752 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-05-25 00:56:00.770763 | orchestrator | Sunday 25 May 2025 00:49:28 +0000 (0:00:02.159) 0:00:24.657 ************
2025-05-25 00:56:00.770774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-25 00:56:00.770786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-25 00:56:00.770805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-25 00:56:00.770822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 00:56:00.770840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 00:56:00.770852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-25 00:56:00.770864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 00:56:00.770875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-25 00:56:00.770924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-25 00:56:00.770941 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.770952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.770963 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.770986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.771005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.771016 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.771027 | orchestrator | 2025-05-25 00:56:00.771038 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-25 00:56:00.771049 | orchestrator | Sunday 25 May 2025 00:49:30 +0000 (0:00:01.838) 0:00:26.496 ************ 2025-05-25 00:56:00.771061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.771163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.771191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.771203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.771231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.771269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.771281 | orchestrator | 2025-05-25 00:56:00.771298 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] 
************** 2025-05-25 00:56:00.771309 | orchestrator | Sunday 25 May 2025 00:49:34 +0000 (0:00:04.034) 0:00:30.530 ************ 2025-05-25 00:56:00.771321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.771410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.771421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.771439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.771451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.771462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.771478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-25 00:56:00.771490 | orchestrator |
2025-05-25 00:56:00.771501 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-05-25 00:56:00.771519 | orchestrator | Sunday 25 May 2025 00:49:36 +0000 (0:00:02.907) 0:00:33.438 ************
2025-05-25 00:56:00.771542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-25 00:56:00.771554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-25 00:56:00.771565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-25 00:56:00.771576 | orchestrator |
2025-05-25 00:56:00.771586 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-05-25 00:56:00.771597 | orchestrator | Sunday 25 May 2025 00:49:39 +0000 (0:00:02.373) 0:00:35.812 ************
2025-05-25 00:56:00.771608 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-25 00:56:00.771632 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-25 00:56:00.771643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-25 00:56:00.771654 | orchestrator |
2025-05-25 00:56:00.771665 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-05-25 00:56:00.771676 | orchestrator | Sunday 25 May 2025 00:49:43 +0000 (0:00:04.006) 0:00:39.819 ************
2025-05-25 00:56:00.771687 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.771769 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.771781 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.771792 | orchestrator |
2025-05-25 00:56:00.771809 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-05-25 00:56:00.771820 | orchestrator | Sunday 25 May 2025 00:49:44 +0000 (0:00:01.087) 0:00:40.907 ************
2025-05-25 00:56:00.771831 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-25 00:56:00.771844 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-25 00:56:00.771855 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-25 00:56:00.771866 | orchestrator |
2025-05-25 00:56:00.771877 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-05-25 00:56:00.771887 | orchestrator | Sunday 25 May 2025 00:49:46 +0000 (0:00:02.152) 0:00:43.059 ************
2025-05-25 00:56:00.771898 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-25 00:56:00.771909 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-25 00:56:00.771934 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-25 00:56:00.771945 | orchestrator |
2025-05-25 00:56:00.771976 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-05-25 00:56:00.771987 | orchestrator | Sunday 25 May 2025 00:49:48 +0000 (0:00:02.050) 0:00:45.349 ************
2025-05-25 00:56:00.771998 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-05-25 00:56:00.772009 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-05-25 00:56:00.772020 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-05-25 00:56:00.772031 | orchestrator |
2025-05-25 00:56:00.772041 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-05-25 00:56:00.772058 | orchestrator | Sunday 25 May 2025 00:49:50 +0000 (0:00:02.050) 0:00:47.399 ************
2025-05-25 00:56:00.772069 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-05-25 00:56:00.772080 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-05-25 00:56:00.772091 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-05-25 00:56:00.772102 | orchestrator |
2025-05-25 00:56:00.772113 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-25 00:56:00.772123 | orchestrator | Sunday 25 May 2025 00:49:52 +0000 (0:00:02.002) 0:00:49.402 ************
2025-05-25 00:56:00.772134 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:56:00.772145 | orchestrator |
2025-05-25 00:56:00.772156 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-05-25 00:56:00.772166 | orchestrator | Sunday 25 May 2025 00:49:53 +0000 (0:00:00.749) 0:00:50.151 ************
2025-05-25 00:56:00.772183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.772212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.772225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.772237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.772248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.772260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.772271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.772297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.772308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.772318 | orchestrator | 2025-05-25 00:56:00.772328 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-25 00:56:00.772338 | orchestrator | Sunday 25 May 2025 00:49:56 +0000 (0:00:03.161) 0:00:53.312 ************ 2025-05-25 00:56:00.772348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 00:56:00.772358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 00:56:00.772368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.772378 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.772389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-25 00:56:00.772408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 00:56:00.772425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.772436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-25 00:56:00.772446 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.772456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 00:56:00.772466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.772476 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.772485 | orchestrator | 2025-05-25 00:56:00.772495 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-25 00:56:00.772505 | orchestrator | Sunday 25 May 2025 00:49:57 +0000 (0:00:00.872) 0:00:54.185 ************ 2025-05-25 00:56:00.772515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-25 00:56:00.772547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 00:56:00.772563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.772574 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.772584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-25 00:56:00.772594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 00:56:00.772605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.772615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-25 00:56:00.772630 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.772657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-25 00:56:00.772675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-25 00:56:00.772685 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.772714 | orchestrator | 2025-05-25 00:56:00.772724 | 
orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-25 00:56:00.772734 | orchestrator | Sunday 25 May 2025 00:49:58 +0000 (0:00:01.271) 0:00:55.456 ************ 2025-05-25 00:56:00.772749 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-25 00:56:00.772759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-25 00:56:00.772769 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-25 00:56:00.772778 | orchestrator | 2025-05-25 00:56:00.772788 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-25 00:56:00.772798 | orchestrator | Sunday 25 May 2025 00:50:01 +0000 (0:00:02.659) 0:00:58.116 ************ 2025-05-25 00:56:00.772807 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-25 00:56:00.772817 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-25 00:56:00.772826 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-25 00:56:00.772836 | orchestrator | 2025-05-25 00:56:00.772845 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-25 00:56:00.772855 | orchestrator | Sunday 25 May 2025 00:50:03 +0000 (0:00:02.131) 0:01:00.247 ************ 2025-05-25 00:56:00.772864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-25 00:56:00.772874 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-25 00:56:00.772884 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-25 00:56:00.772893 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-25 00:56:00.772903 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.772912 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-25 00:56:00.772922 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.772931 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-25 00:56:00.772941 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.772956 | orchestrator | 2025-05-25 00:56:00.772966 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-25 00:56:00.772982 | orchestrator | Sunday 25 May 2025 00:50:05 +0000 (0:00:01.308) 0:01:01.556 ************ 2025-05-25 00:56:00.772992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.773003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.773018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-25 00:56:00.773044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.773055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.773065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-25 00:56:00.773082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.773092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.773102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-25 00:56:00.773117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.773135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.773145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e', '__omit_place_holder__8e7160423617d6b0c7fae0a8380c21fa6a02c52e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-25 00:56:00.773161 | orchestrator | 2025-05-25 00:56:00.773171 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-25 00:56:00.773181 | orchestrator | Sunday 25 May 2025 00:50:07 +0000 (0:00:02.835) 0:01:04.391 ************ 2025-05-25 00:56:00.773191 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.773201 | orchestrator | 2025-05-25 00:56:00.773210 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-25 00:56:00.773220 | orchestrator | Sunday 25 May 2025 00:50:08 +0000 (0:00:00.848) 0:01:05.240 ************ 2025-05-25 00:56:00.773230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-25 00:56:00.773242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.773257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-25 00:56:00.773273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.773304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-25 00:56:00.773338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.773409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773452 | orchestrator | 2025-05-25 00:56:00.773462 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-25 00:56:00.773472 | orchestrator | Sunday 25 May 2025 00:50:12 +0000 (0:00:04.222) 0:01:09.462 ************ 2025-05-25 00:56:00.773482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-25 00:56:00.773492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.773502 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773532 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.773543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-25 00:56:00.773559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.773570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773589 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.773603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-25 00:56:00.773621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.773631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.773656 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.773666 | orchestrator | 2025-05-25 00:56:00.773676 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-25 00:56:00.773685 | orchestrator | Sunday 25 May 2025 00:50:13 +0000 (0:00:00.917) 0:01:10.380 ************ 2025-05-25 00:56:00.773714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-25 00:56:00.773726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-25 00:56:00.773737 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.773746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-25 
00:56:00.773763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-25 00:56:00.773773 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.773783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-25 00:56:00.773799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-25 00:56:00.773809 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.773818 | orchestrator | 2025-05-25 00:56:00.773835 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-25 00:56:00.773845 | orchestrator | Sunday 25 May 2025 00:50:15 +0000 (0:00:01.284) 0:01:11.665 ************ 2025-05-25 00:56:00.773859 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.773869 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.773878 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.773893 | orchestrator | 2025-05-25 00:56:00.773903 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-25 00:56:00.773912 | orchestrator | Sunday 25 May 2025 00:50:16 +0000 (0:00:01.352) 0:01:13.017 ************ 2025-05-25 00:56:00.773941 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.773951 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.773966 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.773982 | orchestrator | 2025-05-25 00:56:00.773996 | orchestrator | TASK [include_role : barbican] 
************************************************* 2025-05-25 00:56:00.774006 | orchestrator | Sunday 25 May 2025 00:50:18 +0000 (0:00:02.145) 0:01:15.163 ************ 2025-05-25 00:56:00.774072 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.774090 | orchestrator | 2025-05-25 00:56:00.774100 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-25 00:56:00.774110 | orchestrator | Sunday 25 May 2025 00:50:19 +0000 (0:00:00.863) 0:01:16.026 ************ 2025-05-25 00:56:00.774129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.774142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.774178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.774222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774243 | orchestrator | 2025-05-25 00:56:00.774253 | orchestrator | TASK [haproxy-config : Add 
configuration for barbican when using single external frontend] *** 2025-05-25 00:56:00.774262 | orchestrator | Sunday 25 May 2025 00:50:24 +0000 (0:00:05.249) 0:01:21.275 ************ 2025-05-25 00:56:00.774277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.774300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774311 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774321 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.774331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.774342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774368 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.774388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.774400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.774420 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.774429 | orchestrator | 2025-05-25 00:56:00.774439 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-25 00:56:00.774449 | orchestrator | Sunday 25 May 2025 00:50:25 +0000 (0:00:00.764) 0:01:22.039 ************ 2025-05-25 
00:56:00.774459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 00:56:00.774469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 00:56:00.774480 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.774490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 00:56:00.774500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 00:56:00.774515 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.774524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 00:56:00.774534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-25 00:56:00.774544 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.774554 | orchestrator | 2025-05-25 00:56:00.774570 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-25 00:56:00.774580 | orchestrator | Sunday 25 May 2025 00:50:26 +0000 (0:00:01.136) 
0:01:23.176 ************ 2025-05-25 00:56:00.774589 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.774598 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.774608 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.774617 | orchestrator | 2025-05-25 00:56:00.774627 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-25 00:56:00.774641 | orchestrator | Sunday 25 May 2025 00:50:27 +0000 (0:00:01.189) 0:01:24.366 ************ 2025-05-25 00:56:00.774651 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.774660 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.774670 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.774679 | orchestrator | 2025-05-25 00:56:00.774688 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-25 00:56:00.774722 | orchestrator | Sunday 25 May 2025 00:50:30 +0000 (0:00:02.252) 0:01:26.618 ************ 2025-05-25 00:56:00.774732 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.774742 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.774751 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.774761 | orchestrator | 2025-05-25 00:56:00.774776 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-25 00:56:00.774786 | orchestrator | Sunday 25 May 2025 00:50:30 +0000 (0:00:00.289) 0:01:26.907 ************ 2025-05-25 00:56:00.774796 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.774805 | orchestrator | 2025-05-25 00:56:00.774815 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-25 00:56:00.774824 | orchestrator | Sunday 25 May 2025 00:50:31 +0000 (0:00:00.932) 0:01:27.840 ************ 2025-05-25 00:56:00.774835 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-25 00:56:00.774846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-25 00:56:00.774863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-25 00:56:00.774874 | orchestrator | 2025-05-25 00:56:00.774883 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-25 00:56:00.774899 | orchestrator | Sunday 25 May 2025 00:50:35 +0000 (0:00:04.635) 0:01:32.476 ************ 2025-05-25 00:56:00.774920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-25 00:56:00.774930 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.774946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-25 00:56:00.774956 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.774967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-25 00:56:00.774983 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.774992 | orchestrator | 2025-05-25 00:56:00.775002 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-25 00:56:00.775012 | orchestrator | Sunday 25 May 2025 00:50:37 +0000 (0:00:01.617) 0:01:34.094 ************ 2025-05-25 00:56:00.775022 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 00:56:00.775033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 00:56:00.775044 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.775054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 00:56:00.775064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 00:56:00.775074 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.775088 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 00:56:00.775104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-25 00:56:00.775114 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.775123 | orchestrator | 2025-05-25 00:56:00.775139 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-25 00:56:00.775148 | orchestrator | Sunday 25 May 2025 00:50:39 +0000 (0:00:01.779) 0:01:35.873 ************ 2025-05-25 00:56:00.775158 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.775167 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.775177 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.775186 | orchestrator | 2025-05-25 00:56:00.775196 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-25 00:56:00.775205 | orchestrator | Sunday 25 May 2025 00:50:40 +0000 (0:00:00.709) 0:01:36.582 ************ 2025-05-25 00:56:00.775222 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.775232 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.775241 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.775251 | orchestrator | 2025-05-25 00:56:00.775261 | orchestrator | TASK 
[include_role : cinder] *************************************************** 2025-05-25 00:56:00.775270 | orchestrator | Sunday 25 May 2025 00:50:41 +0000 (0:00:01.034) 0:01:37.617 ************ 2025-05-25 00:56:00.775280 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.775289 | orchestrator | 2025-05-25 00:56:00.775299 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-25 00:56:00.775308 | orchestrator | Sunday 25 May 2025 00:50:41 +0000 (0:00:00.772) 0:01:38.389 ************ 2025-05-25 00:56:00.775318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.775330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.775392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
2025-05-25 00:56:00.775444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775480 | orchestrator | 2025-05-25 00:56:00.775490 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-25 00:56:00.775500 | orchestrator | Sunday 25 May 2025 00:50:45 +0000 (0:00:03.859) 0:01:42.249 ************ 2025-05-25 00:56:00.775510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.775525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775567 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.775577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.775587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775633 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.775643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.775653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.775689 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.775747 | orchestrator | 2025-05-25 00:56:00.775758 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-25 00:56:00.775767 | orchestrator | Sunday 25 May 2025 00:50:46 +0000 (0:00:00.986) 0:01:43.235 ************ 2025-05-25 00:56:00.775778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 00:56:00.775794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 00:56:00.775805 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.775815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 00:56:00.775825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 00:56:00.775835 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.775845 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 00:56:00.775855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-25 00:56:00.775865 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.775874 | orchestrator | 2025-05-25 00:56:00.775884 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-25 00:56:00.775893 | orchestrator | Sunday 25 May 2025 00:50:47 +0000 (0:00:01.044) 0:01:44.279 ************ 2025-05-25 00:56:00.775903 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.775912 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.775922 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.775931 | orchestrator | 2025-05-25 00:56:00.775941 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-25 00:56:00.775950 | orchestrator | Sunday 25 May 2025 00:50:49 +0000 (0:00:01.552) 0:01:45.832 ************ 2025-05-25 00:56:00.775960 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.775977 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.775987 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.775997 | orchestrator | 2025-05-25 00:56:00.776006 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-25 00:56:00.776016 | orchestrator | Sunday 25 May 2025 00:50:51 +0000 (0:00:02.148) 0:01:47.980 ************ 2025-05-25 00:56:00.776025 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.776035 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.776045 | 
orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.776054 | orchestrator | 2025-05-25 00:56:00.776064 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-25 00:56:00.776073 | orchestrator | Sunday 25 May 2025 00:50:51 +0000 (0:00:00.287) 0:01:48.268 ************ 2025-05-25 00:56:00.776083 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.776093 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.776102 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.776117 | orchestrator | 2025-05-25 00:56:00.776127 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-25 00:56:00.776137 | orchestrator | Sunday 25 May 2025 00:50:52 +0000 (0:00:00.475) 0:01:48.743 ************ 2025-05-25 00:56:00.776146 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.776162 | orchestrator | 2025-05-25 00:56:00.776172 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-25 00:56:00.776182 | orchestrator | Sunday 25 May 2025 00:50:53 +0000 (0:00:01.108) 0:01:49.852 ************ 2025-05-25 00:56:00.776192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 00:56:00.776213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 00:56:00.776223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 00:56:00.776244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 00:56:00.776305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-25 00:56:00.776335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 00:56:00.776360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776426 | orchestrator | 2025-05-25 00:56:00.776438 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-25 00:56:00.776447 | orchestrator | Sunday 25 May 2025 00:50:58 +0000 (0:00:04.767) 0:01:54.619 ************ 
2025-05-25 00:56:00.776455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 00:56:00.776463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 00:56:00.776476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 00:56:00.776531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 00:56:00.776552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776569 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.776577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776619 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.776628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-25 00:56:00.776641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-25 00:56:00.776649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-25 
00:56:00.776711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.776720 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.776728 | orchestrator | 2025-05-25 00:56:00.776736 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-25 00:56:00.776744 | orchestrator | Sunday 25 May 2025 00:50:59 +0000 (0:00:01.203) 0:01:55.823 ************ 2025-05-25 00:56:00.776752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-25 00:56:00.776761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-25 00:56:00.776770 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.776778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-25 00:56:00.776786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-25 
00:56:00.776794 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.776802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-25 00:56:00.776810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-25 00:56:00.776818 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.776826 | orchestrator | 2025-05-25 00:56:00.776834 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-25 00:56:00.776842 | orchestrator | Sunday 25 May 2025 00:51:00 +0000 (0:00:01.367) 0:01:57.191 ************ 2025-05-25 00:56:00.776850 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.776858 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.776865 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.776873 | orchestrator | 2025-05-25 00:56:00.776881 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-25 00:56:00.776889 | orchestrator | Sunday 25 May 2025 00:51:02 +0000 (0:00:01.292) 0:01:58.484 ************ 2025-05-25 00:56:00.776896 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.776908 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.776916 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.776924 | orchestrator | 2025-05-25 00:56:00.776932 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-25 00:56:00.776940 | orchestrator | Sunday 25 May 2025 00:51:04 +0000 (0:00:02.276) 0:02:00.761 ************ 2025-05-25 00:56:00.776947 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.776955 | orchestrator | 
skipping: [testbed-node-1] 2025-05-25 00:56:00.776963 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.776971 | orchestrator | 2025-05-25 00:56:00.776979 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-25 00:56:00.776991 | orchestrator | Sunday 25 May 2025 00:51:04 +0000 (0:00:00.482) 0:02:01.243 ************ 2025-05-25 00:56:00.777004 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.777012 | orchestrator | 2025-05-25 00:56:00.777020 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-25 00:56:00.777028 | orchestrator | Sunday 25 May 2025 00:51:06 +0000 (0:00:01.449) 0:02:02.692 ************ 2025-05-25 00:56:00.777037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 00:56:00.777051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.777067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 00:56:00.777084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.777103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-25 00:56:00.777117 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.777126 | orchestrator | 2025-05-25 00:56:00.777134 | orchestrator | TASK [haproxy-config : Add 
configuration for glance when using single external frontend] *** 2025-05-25 00:56:00.777142 | orchestrator | Sunday 25 May 2025 00:51:11 +0000 (0:00:05.081) 0:02:07.774 ************ 2025-05-25 00:56:00.777163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 00:56:00.777178 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.777187 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.777208 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 00:56:00.777224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.777233 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.777246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-25 00:56:00.777277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.777287 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.777295 | orchestrator | 2025-05-25 00:56:00.777303 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-25 00:56:00.777311 | orchestrator | Sunday 25 May 2025 00:51:14 +0000 (0:00:03.241) 0:02:11.015 ************ 2025-05-25 00:56:00.777319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 00:56:00.777328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 00:56:00.777336 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.777344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 00:56:00.777366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 00:56:00.777375 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.777383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 00:56:00.777392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-25 00:56:00.777400 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.777408 | orchestrator | 2025-05-25 00:56:00.777416 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-25 00:56:00.777424 | orchestrator | Sunday 25 May 2025 00:51:18 +0000 (0:00:04.409) 0:02:15.425 ************ 2025-05-25 00:56:00.777432 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.777439 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.777447 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.777455 | orchestrator | 2025-05-25 00:56:00.777463 | orchestrator | TASK [proxysql-config : Copying 
over glance ProxySQL rules config] ************* 2025-05-25 00:56:00.777471 | orchestrator | Sunday 25 May 2025 00:51:19 +0000 (0:00:01.041) 0:02:16.467 ************ 2025-05-25 00:56:00.777479 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.777486 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.777494 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.777502 | orchestrator | 2025-05-25 00:56:00.777510 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-25 00:56:00.777518 | orchestrator | Sunday 25 May 2025 00:51:21 +0000 (0:00:01.893) 0:02:18.360 ************ 2025-05-25 00:56:00.777525 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.777533 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.777541 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.777549 | orchestrator | 2025-05-25 00:56:00.777557 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-25 00:56:00.777565 | orchestrator | Sunday 25 May 2025 00:51:22 +0000 (0:00:00.359) 0:02:18.719 ************ 2025-05-25 00:56:00.777573 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.777580 | orchestrator | 2025-05-25 00:56:00.777588 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-25 00:56:00.777596 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.904) 0:02:19.624 ************ 2025-05-25 00:56:00.777605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 00:56:00.777621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 00:56:00.777635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-25 00:56:00.777644 | orchestrator | 2025-05-25 00:56:00.777651 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-25 00:56:00.777659 | orchestrator | Sunday 25 May 2025 00:51:26 +0000 (0:00:03.665) 
0:02:23.289 ************ 2025-05-25 00:56:00.777667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-25 00:56:00.777676 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.777684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-25 00:56:00.777709 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.777718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-25 00:56:00.777731 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.777739 | orchestrator | 2025-05-25 00:56:00.777747 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-25 00:56:00.777755 | orchestrator | Sunday 25 May 2025 00:51:27 +0000 (0:00:00.548) 0:02:23.837 ************ 2025-05-25 00:56:00.777763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-25 00:56:00.777771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-25 00:56:00.777779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-25 00:56:00.777791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-25 00:56:00.777799 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.777807 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.777814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  
2025-05-25 00:56:00.777827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-25 00:56:00.777835 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.777843 | orchestrator | 2025-05-25 00:56:00.777851 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-25 00:56:00.777858 | orchestrator | Sunday 25 May 2025 00:51:28 +0000 (0:00:00.848) 0:02:24.686 ************ 2025-05-25 00:56:00.777866 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.777874 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.777882 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.777890 | orchestrator | 2025-05-25 00:56:00.777897 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-25 00:56:00.777905 | orchestrator | Sunday 25 May 2025 00:51:29 +0000 (0:00:01.129) 0:02:25.816 ************ 2025-05-25 00:56:00.777913 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.777921 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.777929 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.777936 | orchestrator | 2025-05-25 00:56:00.777944 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-25 00:56:00.777952 | orchestrator | Sunday 25 May 2025 00:51:31 +0000 (0:00:02.024) 0:02:27.840 ************ 2025-05-25 00:56:00.777960 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.777967 | orchestrator | 2025-05-25 00:56:00.777975 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-25 00:56:00.777983 | orchestrator | Sunday 25 May 2025 00:51:32 +0000 (0:00:01.420) 0:02:29.261 
************ 2025-05-25 00:56:00.777991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.778005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.778040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 
'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.778056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.778065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.778078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.778087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.778095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': 
{'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.778110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.778118 | orchestrator | 2025-05-25 00:56:00.778130 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-25 00:56:00.778144 | orchestrator | Sunday 25 May 2025 00:51:39 +0000 (0:00:06.943) 0:02:36.205 ************ 2025-05-25 00:56:00.778152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.778166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.778175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.778183 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.778191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.778212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 
'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.778221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.778234 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.778245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.778258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.778272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.778285 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.778298 | orchestrator | 2025-05-25 00:56:00.778312 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-25 00:56:00.778321 | orchestrator | Sunday 25 May 2025 00:51:40 +0000 (0:00:00.717) 0:02:36.923 ************ 2025-05-25 00:56:00.778330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778342 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778372 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.778381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778419 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.778426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-25 00:56:00.778459 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.778466 | orchestrator | 2025-05-25 00:56:00.778474 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-25 00:56:00.778482 | orchestrator | Sunday 25 May 2025 00:51:41 +0000 (0:00:01.154) 0:02:38.078 ************ 2025-05-25 00:56:00.778490 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.778498 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.778505 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.778513 | orchestrator | 2025-05-25 00:56:00.778521 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-25 00:56:00.778529 | orchestrator | Sunday 25 May 2025 00:51:42 +0000 (0:00:01.209) 0:02:39.287 ************ 2025-05-25 00:56:00.778537 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.778544 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.778552 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.778560 | orchestrator | 
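The heat task output above shows a pattern worth noting: `heat-api` and `heat-api-cfn` report `changed` because their service entries carry a `haproxy` key, while `heat-engine` is reported as `skipping` on every node because it is a backend-only service with no listener definition. A minimal sketch of that filtering logic, assuming a simplified service dict modeled on the log items (this is an illustration, not the actual kolla-ansible role implementation):

```python
# Hypothetical sketch: only service entries that define a 'haproxy' mapping
# produce load-balancer frontends/backends; entries without one (for example
# heat-engine in the log above) are skipped. The dict shapes mirror the
# logged items but are trimmed for illustration.

services = {
    "heat-api": {
        "enabled": True,
        "haproxy": {
            "heat_api": {"enabled": True, "external": False, "port": "8004"},
            "heat_api_external": {"enabled": True, "external": True, "port": "8004"},
        },
    },
    "heat-engine": {
        "enabled": True,  # no 'haproxy' key -> nothing to template, task skips
    },
}

def haproxy_listeners(services):
    """Yield (service, listener_name, listener) for every enabled listener
    that should appear in the generated haproxy configuration."""
    for name, svc in services.items():
        if not svc.get("enabled"):
            continue
        for lname, listener in svc.get("haproxy", {}).items():
            if listener.get("enabled"):
                yield name, lname, listener

configured = [lname for _, lname, _ in haproxy_listeners(services)]
print(configured)  # heat-engine contributes nothing
```

This matches the per-item `changed`/`skipping` pairs in the log: each dict item is one loop iteration of the `haproxy-config` tasks.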
2025-05-25 00:56:00.778568 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-25 00:56:00.778575 | orchestrator | Sunday 25 May 2025 00:51:44 +0000 (0:00:02.093) 0:02:41.381 ************ 2025-05-25 00:56:00.778583 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.778591 | orchestrator | 2025-05-25 00:56:00.778599 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-25 00:56:00.778607 | orchestrator | Sunday 25 May 2025 00:51:45 +0000 (0:00:01.063) 0:02:42.444 ************ 2025-05-25 00:56:00.778626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:56:00.778643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:56:00.778663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:56:00.778678 | orchestrator | 2025-05-25 00:56:00.778686 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-25 00:56:00.778742 | orchestrator | Sunday 25 May 2025 00:51:49 +0000 (0:00:03.961) 0:02:46.406 ************ 2025-05-25 00:56:00.778757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-25 00:56:00.778772 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.778788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-25 00:56:00.778798 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.778811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-25 00:56:00.778825 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.778833 | orchestrator | 2025-05-25 00:56:00.778845 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-25 00:56:00.778853 | orchestrator | Sunday 25 May 2025 00:51:50 +0000 (0:00:00.989) 0:02:47.396 ************ 2025-05-25 00:56:00.778862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-25 00:56:00.778872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-25 00:56:00.778880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-25 00:56:00.778889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-25 00:56:00.778897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-25 00:56:00.778905 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.778913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-25 00:56:00.778921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-25 00:56:00.778930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-25 00:56:00.778945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-05-25 00:56:00.778958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-25 00:56:00.778966 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.778974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-25 00:56:00.778982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-25 00:56:00.778994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-25 00:56:00.779002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-25 00:56:00.779010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-25 00:56:00.779018 | 
orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.779026 | orchestrator | 2025-05-25 00:56:00.779034 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-25 00:56:00.779042 | orchestrator | Sunday 25 May 2025 00:51:52 +0000 (0:00:01.229) 0:02:48.625 ************ 2025-05-25 00:56:00.779050 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.779058 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.779066 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.779073 | orchestrator | 2025-05-25 00:56:00.779081 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-25 00:56:00.779089 | orchestrator | Sunday 25 May 2025 00:51:53 +0000 (0:00:01.346) 0:02:49.971 ************ 2025-05-25 00:56:00.779097 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.779105 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.779112 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.779120 | orchestrator | 2025-05-25 00:56:00.779128 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-25 00:56:00.779136 | orchestrator | Sunday 25 May 2025 00:51:55 +0000 (0:00:02.201) 0:02:52.173 ************ 2025-05-25 00:56:00.779143 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.779151 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.779159 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.779167 | orchestrator | 2025-05-25 00:56:00.779174 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-25 00:56:00.779182 | orchestrator | Sunday 25 May 2025 00:51:56 +0000 (0:00:00.437) 0:02:52.611 ************ 2025-05-25 00:56:00.779190 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.779198 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.779206 | 
orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.779214 | orchestrator | 2025-05-25 00:56:00.779222 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-25 00:56:00.779237 | orchestrator | Sunday 25 May 2025 00:51:56 +0000 (0:00:00.273) 0:02:52.884 ************ 2025-05-25 00:56:00.779256 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.779264 | orchestrator | 2025-05-25 00:56:00.779271 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-25 00:56:00.779277 | orchestrator | Sunday 25 May 2025 00:51:57 +0000 (0:00:01.201) 0:02:54.086 ************ 2025-05-25 00:56:00.779284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:56:00.779308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:56:00.779320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:56:00.779329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:56:00.779336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:56:00.779348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:56:00.779359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:56:00.779370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:56:00.779378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:56:00.779385 | orchestrator | 2025-05-25 00:56:00.779391 | orchestrator | TASK 
[haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-25 00:56:00.779398 | orchestrator | Sunday 25 May 2025 00:52:01 +0000 (0:00:04.345) 0:02:58.432 ************ 2025-05-25 00:56:00.779406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:56:00.779418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:56:00.779425 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:56:00.779432 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.779447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:56:00.779454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:56:00.779461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:56:00.779473 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.779480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:56:00.779487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:56:00.779498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:56:00.779505 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.779512 | orchestrator | 2025-05-25 00:56:00.779519 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-25 00:56:00.779525 | orchestrator | Sunday 25 May 2025 00:52:02 +0000 (0:00:00.758) 0:02:59.190 ************ 2025-05-25 00:56:00.779536 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-25 00:56:00.779543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-25 00:56:00.779550 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.779557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-25 00:56:00.779568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-25 00:56:00.779575 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.779582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-25 00:56:00.779589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-25 00:56:00.779596 | orchestrator | skipping: [testbed-node-2] 2025-05-25 
00:56:00.779602 | orchestrator | 2025-05-25 00:56:00.779609 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-25 00:56:00.779616 | orchestrator | Sunday 25 May 2025 00:52:03 +0000 (0:00:01.193) 0:03:00.384 ************ 2025-05-25 00:56:00.779623 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.779629 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.779636 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.779642 | orchestrator | 2025-05-25 00:56:00.779649 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-25 00:56:00.779656 | orchestrator | Sunday 25 May 2025 00:52:05 +0000 (0:00:01.299) 0:03:01.683 ************ 2025-05-25 00:56:00.779667 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.779674 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.779680 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.779687 | orchestrator | 2025-05-25 00:56:00.779711 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-25 00:56:00.779718 | orchestrator | Sunday 25 May 2025 00:52:07 +0000 (0:00:02.231) 0:03:03.915 ************ 2025-05-25 00:56:00.779725 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.779731 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.779738 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.779745 | orchestrator | 2025-05-25 00:56:00.779751 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-25 00:56:00.779758 | orchestrator | Sunday 25 May 2025 00:52:07 +0000 (0:00:00.357) 0:03:04.272 ************ 2025-05-25 00:56:00.779764 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.779771 | orchestrator | 2025-05-25 00:56:00.779777 | orchestrator | TASK [haproxy-config : Copying 
over magnum haproxy config] ********************* 2025-05-25 00:56:00.779784 | orchestrator | Sunday 25 May 2025 00:52:09 +0000 (0:00:01.293) 0:03:05.566 ************ 2025-05-25 00:56:00.779796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 00:56:00.779823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.779844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-25 00:56:00.779855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.779867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-25 00:56:00.779884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.779902 | orchestrator |
2025-05-25 00:56:00.779913 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-05-25 00:56:00.779924 | orchestrator | Sunday 25 May 2025 00:52:12 +0000 (0:00:03.894) 0:03:09.461 ************
2025-05-25 00:56:00.779941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-25 00:56:00.779954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.779961 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.779968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-25 00:56:00.779975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.779982 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.779997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-25 00:56:00.780012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780019 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.780026 | orchestrator |
2025-05-25 00:56:00.780045 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-05-25 00:56:00.780060 | orchestrator | Sunday 25 May 2025 00:52:13 +0000 (0:00:00.662) 0:03:10.123 ************
2025-05-25 00:56:00.780067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-25 00:56:00.780081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-25 00:56:00.780088 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.780095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-25 00:56:00.780102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-25 00:56:00.780109 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.780115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-05-25 00:56:00.780122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-05-25 00:56:00.780129 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.780136 | orchestrator |
2025-05-25 00:56:00.780142 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-05-25 00:56:00.780149 | orchestrator | Sunday 25 May 2025 00:52:14 +0000 (0:00:01.046) 0:03:11.169 ************
2025-05-25 00:56:00.780156 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.780162 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.780169 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.780176 | orchestrator |
2025-05-25 00:56:00.780182 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-05-25 00:56:00.780189 | orchestrator | Sunday 25 May 2025 00:52:15 +0000 (0:00:01.209) 0:03:12.379 ************
2025-05-25 00:56:00.780195 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.780206 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.780213 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.780220 | orchestrator |
2025-05-25 00:56:00.780226 | orchestrator | TASK [include_role : manila] ***************************************************
2025-05-25 00:56:00.780233 | orchestrator | Sunday 25 May 2025 00:52:18 +0000 (0:00:02.216) 0:03:14.595 ************
2025-05-25 00:56:00.780240 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:56:00.780248 | orchestrator |
2025-05-25 00:56:00.780259 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-05-25 00:56:00.780270 | orchestrator | Sunday 25 May 2025 00:52:19 +0000 (0:00:01.137) 0:03:15.733 ************
2025-05-25 00:56:00.780292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-25 00:56:00.780305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-25 00:56:00.780357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-25 00:56:00.780394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780420 | orchestrator |
2025-05-25 00:56:00.780427 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-05-25 00:56:00.780433 | orchestrator | Sunday 25 May 2025 00:52:23 +0000 (0:00:04.337) 0:03:20.071 ************
2025-05-25 00:56:00.780448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-25 00:56:00.780455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780482 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.780490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-25 00:56:00.780500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780525 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.780532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-05-25 00:56:00.780539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.780568 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.780575 | orchestrator |
2025-05-25 00:56:00.780581 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-05-25 00:56:00.780588 | orchestrator | Sunday 25 May 2025 00:52:24 +0000 (0:00:01.133) 0:03:21.204 ************
2025-05-25 00:56:00.780595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-05-25 00:56:00.780606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-05-25 00:56:00.780613 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.780619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-05-25 00:56:00.780626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-05-25 00:56:00.780633 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.780639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-05-25 00:56:00.780646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-05-25 00:56:00.780653 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.780660 | orchestrator |
2025-05-25 00:56:00.780666 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-05-25 00:56:00.780673 | orchestrator | Sunday 25 May 2025 00:52:25 +0000 (0:00:01.154) 0:03:22.358 ************
2025-05-25 00:56:00.780679 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.780708 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.780717 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.780724 | orchestrator |
2025-05-25 00:56:00.780730 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-05-25 00:56:00.780737 | orchestrator | Sunday 25 May 2025 00:52:27 +0000 (0:00:01.374) 0:03:23.733 ************
2025-05-25 00:56:00.780743 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.780750 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.780756 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.780763 | orchestrator |
2025-05-25 00:56:00.780769 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-05-25 00:56:00.780776 | orchestrator | Sunday 25 May 2025 00:52:29 +0000 (0:00:02.180) 0:03:25.913 ************
2025-05-25 00:56:00.780782 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:56:00.780789 | orchestrator |
2025-05-25 00:56:00.780795 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-05-25 00:56:00.780802 | orchestrator | Sunday 25 May 2025 00:52:30 +0000 (0:00:01.467) 0:03:27.380 ************
2025-05-25 00:56:00.780809 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:56:00.780816 | orchestrator |
2025-05-25 00:56:00.780822 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-05-25 00:56:00.780829 | orchestrator | Sunday 25 May 2025 00:52:34 +0000 (0:00:03.126) 0:03:30.507 ************
2025-05-25 00:56:00.780839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-25 00:56:00.780852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-25 00:56:00.780864 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.780871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-25 00:56:00.780879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-25 00:56:00.780886 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.780900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-25 00:56:00.780914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-25 00:56:00.780922 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.780928 | orchestrator |
2025-05-25 00:56:00.780935 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-05-25 00:56:00.780941 | orchestrator | Sunday 25 May 2025 00:52:37 +0000 (0:00:03.715) 0:03:34.222 ************
2025-05-25 00:56:00.780948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2
192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 00:56:00.780963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 00:56:00.780970 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.780977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 00:56:00.780989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 00:56:00.780996 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781012 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-25 00:56:00.781024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-25 00:56:00.781031 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781038 | orchestrator | 2025-05-25 00:56:00.781045 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-25 00:56:00.781051 | orchestrator | Sunday 25 May 2025 00:52:40 +0000 (0:00:03.234) 0:03:37.456 ************ 2025-05-25 00:56:00.781058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 00:56:00.781065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 00:56:00.781072 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 00:56:00.781089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 00:56:00.781096 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 00:56:00.781119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-25 00:56:00.781126 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781133 | orchestrator | 2025-05-25 00:56:00.781140 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-25 00:56:00.781147 | orchestrator | Sunday 25 May 2025 00:52:44 +0000 (0:00:03.614) 0:03:41.071 ************ 2025-05-25 00:56:00.781153 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.781160 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.781166 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.781173 | orchestrator | 2025-05-25 00:56:00.781180 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-25 00:56:00.781186 | orchestrator | Sunday 25 May 2025 00:52:46 +0000 (0:00:02.254) 0:03:43.325 ************ 2025-05-25 00:56:00.781193 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781199 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781206 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781212 | orchestrator | 2025-05-25 00:56:00.781219 | orchestrator | TASK [include_role : 
masakari] ************************************************* 2025-05-25 00:56:00.781225 | orchestrator | Sunday 25 May 2025 00:52:48 +0000 (0:00:01.758) 0:03:45.083 ************ 2025-05-25 00:56:00.781232 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781238 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781245 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781251 | orchestrator | 2025-05-25 00:56:00.781258 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-25 00:56:00.781265 | orchestrator | Sunday 25 May 2025 00:52:49 +0000 (0:00:00.505) 0:03:45.589 ************ 2025-05-25 00:56:00.781271 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.781278 | orchestrator | 2025-05-25 00:56:00.781288 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-25 00:56:00.781299 | orchestrator | Sunday 25 May 2025 00:52:50 +0000 (0:00:01.385) 0:03:46.975 ************ 2025-05-25 00:56:00.781311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-25 00:56:00.781324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-25 00:56:00.781349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-25 00:56:00.781357 | orchestrator | 2025-05-25 00:56:00.781364 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-25 00:56:00.781370 | orchestrator | Sunday 25 May 2025 00:52:52 +0000 (0:00:01.849) 0:03:48.824 ************ 2025-05-25 00:56:00.781377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-25 00:56:00.781384 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-25 00:56:00.781398 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-25 00:56:00.781419 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781426 | orchestrator | 2025-05-25 00:56:00.781432 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-25 00:56:00.781439 | orchestrator | Sunday 25 May 2025 00:52:52 +0000 (0:00:00.379) 0:03:49.204 ************ 2025-05-25 00:56:00.781446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-25 00:56:00.781453 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-25 00:56:00.781469 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-25 00:56:00.781483 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781490 | orchestrator | 2025-05-25 00:56:00.781500 | orchestrator | TASK [proxysql-config : Copying over 
memcached ProxySQL users config] ********** 2025-05-25 00:56:00.781507 | orchestrator | Sunday 25 May 2025 00:52:53 +0000 (0:00:00.980) 0:03:50.185 ************ 2025-05-25 00:56:00.781514 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781520 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781527 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781534 | orchestrator | 2025-05-25 00:56:00.781540 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-25 00:56:00.781547 | orchestrator | Sunday 25 May 2025 00:52:54 +0000 (0:00:00.851) 0:03:51.036 ************ 2025-05-25 00:56:00.781553 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781560 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781567 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781573 | orchestrator | 2025-05-25 00:56:00.781580 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-25 00:56:00.781587 | orchestrator | Sunday 25 May 2025 00:52:56 +0000 (0:00:01.498) 0:03:52.535 ************ 2025-05-25 00:56:00.781593 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.781600 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.781606 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.781613 | orchestrator | 2025-05-25 00:56:00.781620 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-25 00:56:00.781626 | orchestrator | Sunday 25 May 2025 00:52:56 +0000 (0:00:00.304) 0:03:52.839 ************ 2025-05-25 00:56:00.781633 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.781639 | orchestrator | 2025-05-25 00:56:00.781646 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-25 00:56:00.781652 | orchestrator | Sunday 25 May 
2025 00:52:57 +0000 (0:00:01.509) 0:03:54.349 ************ 2025-05-25 00:56:00.781659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 00:56:00.781671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.781678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.781712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-25 00:56:00.781722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.781729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-25 00:56:00.781741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.781748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.781962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.781985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.781998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-25 00:56:00.782078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-25 00:56:00.782096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-25 00:56:00.782130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:56:00.782185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-25 00:56:00.782200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-25 00:56:00.782218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-25 00:56:00.782280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:56:00.782287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-25 00:56:00.782306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-25 00:56:00.782353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-25 00:56:00.782361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-25 00:56:00.782375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 00:56:00.782388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-25 00:56:00.782409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-25 00:56:00.782424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-25 00:56:00.782435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782442 | orchestrator |
2025-05-25 00:56:00.782449 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-05-25 00:56:00.782455 | orchestrator | Sunday 25 May 2025 00:53:02 +0000 (0:00:04.707) 0:03:59.057 ************
2025-05-25 00:56:00.782466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-25 00:56:00.782479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-25 00:56:00.782516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-25 00:56:00.782528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.782535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782575 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-25 00:56:00.782604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-25 00:56:00.782612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.782658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-25 00:56:00.782744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.782761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 00:56:00.782786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782811 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.782823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-05-25 00:56:00.782832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-25 00:56:00.782857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.782873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.782917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.782928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 00:56:00.782943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782957 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.782964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-05-25 00:56:00.782971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.782978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.782989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-25 00:56:00.783005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-25 00:56:00.783020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-25 00:56:00.783027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783033 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.783039 | orchestrator | 2025-05-25 00:56:00.783046 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-25 00:56:00.783052 | orchestrator | Sunday 25 May 2025 00:53:04 +0000 (0:00:01.838) 0:04:00.895 ************ 2025-05-25 00:56:00.783063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-25 00:56:00.783070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-25 00:56:00.783077 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.783083 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-25 00:56:00.783089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-25 00:56:00.783096 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.783105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-25 00:56:00.783111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-25 00:56:00.783118 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.783124 | orchestrator | 2025-05-25 00:56:00.783130 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-25 00:56:00.783230 | orchestrator | Sunday 25 May 2025 00:53:06 +0000 (0:00:01.996) 0:04:02.891 ************ 2025-05-25 00:56:00.783245 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.783252 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.783258 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.783264 | orchestrator | 2025-05-25 00:56:00.783270 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-25 00:56:00.783276 | orchestrator | Sunday 25 May 2025 00:53:07 +0000 (0:00:01.415) 0:04:04.306 ************ 2025-05-25 00:56:00.783283 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.783289 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.783295 | orchestrator | changed: 
[testbed-node-2] 2025-05-25 00:56:00.783301 | orchestrator | 2025-05-25 00:56:00.783307 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-25 00:56:00.783314 | orchestrator | Sunday 25 May 2025 00:53:10 +0000 (0:00:02.355) 0:04:06.662 ************ 2025-05-25 00:56:00.783320 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.783326 | orchestrator | 2025-05-25 00:56:00.783332 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-25 00:56:00.783338 | orchestrator | Sunday 25 May 2025 00:53:11 +0000 (0:00:01.579) 0:04:08.241 ************ 2025-05-25 00:56:00.783345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.783358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.783365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.783371 | orchestrator | 2025-05-25 00:56:00.783381 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-25 00:56:00.783388 | orchestrator | Sunday 25 May 2025 00:53:15 +0000 (0:00:03.779) 0:04:12.021 ************ 2025-05-25 00:56:00.783398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.783405 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.783412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.783423 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.783429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.783436 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.783442 | orchestrator | 2025-05-25 00:56:00.783448 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-25 00:56:00.783454 | orchestrator | Sunday 25 May 2025 00:53:16 +0000 (0:00:00.719) 0:04:12.740 ************ 2025-05-25 00:56:00.783461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783475 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.783482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783498 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.783504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783520 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.783526 | orchestrator | 2025-05-25 00:56:00.783532 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-25 00:56:00.783538 | orchestrator | Sunday 25 May 2025 00:53:17 +0000 (0:00:00.951) 0:04:13.691 ************ 2025-05-25 00:56:00.783545 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.783551 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.783557 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.783563 | orchestrator | 2025-05-25 00:56:00.783569 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-25 00:56:00.783575 | orchestrator | Sunday 25 May 2025 00:53:18 +0000 (0:00:01.522) 0:04:15.214 ************ 2025-05-25 00:56:00.783581 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.783588 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.783594 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.783600 | orchestrator | 
2025-05-25 00:56:00.783606 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-25 00:56:00.783616 | orchestrator | Sunday 25 May 2025 00:53:21 +0000 (0:00:02.303) 0:04:17.518 ************ 2025-05-25 00:56:00.783622 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.783628 | orchestrator | 2025-05-25 00:56:00.783634 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-25 00:56:00.783640 | orchestrator | Sunday 25 May 2025 00:53:22 +0000 (0:00:01.608) 0:04:19.127 ************ 2025-05-25 00:56:00.783647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.783655 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.783686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.783712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783745 | orchestrator | 2025-05-25 00:56:00.783751 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-25 00:56:00.783762 | orchestrator | Sunday 25 May 2025 00:53:27 +0000 (0:00:04.971) 0:04:24.098 ************ 2025-05-25 00:56:00.783769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}})  2025-05-25 00:56:00.783775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783788 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.783801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.783808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783825 | orchestrator | skipping: [testbed-node-1] 
2025-05-25 00:56:00.783831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.783838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.783856 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.783863 | orchestrator | 2025-05-25 00:56:00.783870 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-25 00:56:00.783877 | orchestrator | Sunday 25 May 2025 00:53:28 +0000 (0:00:00.829) 0:04:24.928 ************ 2025-05-25 00:56:00.783894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-25 00:56:00.783925 | orchestrator | skipping: 
[testbed-node-0]
2025-05-25 00:56:00.783932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-25 00:56:00.783940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-25 00:56:00.783947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-25 00:56:00.783954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-25 00:56:00.783962 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.783970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-25 00:56:00.783978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-05-25 00:56:00.783985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-25 00:56:00.783993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-05-25 00:56:00.784000 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784008 | orchestrator |
2025-05-25 00:56:00.784015 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-05-25 00:56:00.784022 | orchestrator | Sunday 25 May 2025 00:53:29 +0000 (0:00:01.292) 0:04:26.220 ************
2025-05-25 00:56:00.784030 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.784037 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.784044 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.784051 | orchestrator |
2025-05-25 00:56:00.784059 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-05-25 00:56:00.784066 | orchestrator | Sunday 25 May 2025 00:53:31 +0000 (0:00:01.412) 0:04:27.633 ************
2025-05-25 00:56:00.784073 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.784081 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.784093 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.784100 | orchestrator |
2025-05-25 00:56:00.784107 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-05-25 00:56:00.784114 | orchestrator | Sunday 25 May 2025 00:53:33 +0000 (0:00:02.380) 0:04:30.014 ************
2025-05-25 00:56:00.784121 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:56:00.784128 | orchestrator |
2025-05-25 00:56:00.784136 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-05-25 00:56:00.784143 | orchestrator | Sunday 25 May 2025 00:53:34 +0000 (0:00:01.456) 0:04:31.471 ************
2025-05-25 00:56:00.784154 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-05-25 00:56:00.784161 | orchestrator |
2025-05-25 00:56:00.784168 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-05-25 00:56:00.784178 | orchestrator | Sunday 25 May 2025 00:53:36 +0000 (0:00:01.523) 0:04:32.994 ************
2025-05-25 00:56:00.784192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784225 | orchestrator |
2025-05-25 00:56:00.784236 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-05-25 00:56:00.784246 | orchestrator | Sunday 25 May 2025 00:53:41 +0000 (0:00:04.718) 0:04:37.713 ************
2025-05-25 00:56:00.784254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784261 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784281 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784294 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784300 | orchestrator |
2025-05-25 00:56:00.784306 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-05-25 00:56:00.784312 | orchestrator | Sunday 25 May 2025 00:53:42 +0000 (0:00:01.330) 0:04:39.043 ************
2025-05-25 00:56:00.784318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-25 00:56:00.784328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-25 00:56:00.784335 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-25 00:56:00.784351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-25 00:56:00.784357 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-25 00:56:00.784371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-05-25 00:56:00.784377 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784383 | orchestrator |
2025-05-25 00:56:00.784389 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-25 00:56:00.784395 | orchestrator | Sunday 25 May 2025 00:53:44 +0000 (0:00:01.991) 0:04:41.035 ************
2025-05-25 00:56:00.784401 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.784407 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.784414 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.784420 | orchestrator |
2025-05-25 00:56:00.784426 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-25 00:56:00.784432 | orchestrator | Sunday 25 May 2025 00:53:47 +0000 (0:00:02.819) 0:04:43.855 ************
2025-05-25 00:56:00.784438 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:56:00.784444 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:56:00.784450 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:56:00.784456 | orchestrator |
2025-05-25 00:56:00.784463 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-05-25 00:56:00.784469 | orchestrator | Sunday 25 May 2025 00:53:50 +0000 (0:00:03.370) 0:04:47.225 ************
2025-05-25 00:56:00.784475 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-05-25 00:56:00.784485 | orchestrator |
2025-05-25 00:56:00.784491 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-05-25 00:56:00.784497 | orchestrator | Sunday 25 May 2025 00:53:51 +0000 (0:00:01.246) 0:04:48.472 ************
2025-05-25 00:56:00.784504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784510 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784523 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784538 | orchestrator | skipping: [testbed-node-2]
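The "skipping" results above follow from the service items themselves: nova-spicehtml5proxy is deployed with 'enabled': False, so no haproxy listener is rendered for it. A minimal sketch of that decision logic (not kolla-ansible's actual implementation; the data shape is copied from the log, the helper names are hypothetical):

```python
# Sketch of the enable/skip decision visible in the log above.
# Data shape taken verbatim from the 'nova-spicehtml5proxy' item; the
# functions below are illustrative, not kolla-ansible's real code.
service = {
    "key": "nova-spicehtml5proxy",
    "value": {
        "group": "nova-spicehtml5proxy",
        "enabled": False,  # disabled in this testbed, hence "skipping"
        "haproxy": {
            "nova_spicehtml5proxy": {
                "enabled": False, "mode": "http", "external": False,
                "port": "6082", "listen_port": "6082",
                "backend_http_extra": ["timeout tunnel 1h"],
            },
            "nova_spicehtml5proxy_external": {
                "enabled": False, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "6082", "listen_port": "6082",
                "backend_http_extra": ["timeout tunnel 1h"],
            },
        },
    },
}

def to_bool(value):
    # The log mixes real booleans with 'yes'/'no' strings (e.g.
    # nova_metadata_external has 'enabled': 'no'); plain Python truthiness
    # would treat 'no' as true, so coerce Ansible-style.
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("yes", "true", "1")

def enabled_listeners(item):
    """Return the haproxy listener names that would actually be rendered."""
    value = item["value"]
    if not to_bool(value.get("enabled", False)):
        return []  # whole service disabled -> task reports "skipping"
    return [name for name, cfg in value.get("haproxy", {}).items()
            if to_bool(cfg.get("enabled", False))]

print(enabled_listeners(service))  # []
```

The same check explains the nova-novncproxy results earlier in the log: there the service and both listeners are enabled, so the haproxy config task reports "changed" instead.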
2025-05-25 00:56:00.784548 | orchestrator |
2025-05-25 00:56:00.784561 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-05-25 00:56:00.784572 | orchestrator | Sunday 25 May 2025 00:53:53 +0000 (0:00:01.651) 0:04:50.123 ************
2025-05-25 00:56:00.784587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784597 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784619 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-05-25 00:56:00.784642 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784648 | orchestrator |
2025-05-25 00:56:00.784655 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-05-25 00:56:00.784661 | orchestrator | Sunday 25 May 2025 00:53:55 +0000 (0:00:01.875) 0:04:51.999 ************
2025-05-25 00:56:00.784667 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784673 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784679 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784685 | orchestrator |
2025-05-25 00:56:00.784707 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-25 00:56:00.784714 | orchestrator | Sunday 25 May 2025 00:53:57 +0000 (0:00:01.782) 0:04:53.781 ************
2025-05-25 00:56:00.784721 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:56:00.784727 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.784733 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.784739 | orchestrator |
2025-05-25 00:56:00.784746 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-25 00:56:00.784752 | orchestrator | Sunday 25 May 2025 00:54:00 +0000 (0:00:02.961) 0:04:56.743 ************
2025-05-25 00:56:00.784758 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:56:00.784764 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.784770 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.784776 | orchestrator |
2025-05-25 00:56:00.784782 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-05-25 00:56:00.784789 | orchestrator | Sunday 25 May 2025 00:54:03 +0000 (0:00:03.438) 0:05:00.181 ************
2025-05-25 00:56:00.784795 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-05-25 00:56:00.784801 | orchestrator |
2025-05-25 00:56:00.784807 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-05-25 00:56:00.784813 | orchestrator | Sunday 25 May 2025 00:54:04 +0000 (0:00:01.268) 0:05:01.449 ************
2025-05-25 00:56:00.784819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-25 00:56:00.784826 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-25 00:56:00.784843 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-25 00:56:00.784863 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784869 | orchestrator |
2025-05-25 00:56:00.784876 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-05-25 00:56:00.784882 | orchestrator | Sunday 25 May 2025 00:54:06 +0000 (0:00:01.429) 0:05:02.879 ************
2025-05-25 00:56:00.784888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-25 00:56:00.784895 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-25 00:56:00.784907 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-25 00:56:00.784920 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784926 | orchestrator |
2025-05-25 00:56:00.784932 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-05-25 00:56:00.784939 | orchestrator | Sunday 25 May 2025 00:54:08 +0000 (0:00:01.812) 0:05:04.691 ************
2025-05-25 00:56:00.784945 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.784951 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:56:00.784957 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:56:00.784963 | orchestrator |
2025-05-25 00:56:00.784969 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-25 00:56:00.784976 | orchestrator | Sunday 25 May 2025 00:54:10 +0000 (0:00:01.986) 0:05:06.678 ************
2025-05-25 00:56:00.784982 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:56:00.784988 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.784994 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.785000 | orchestrator |
2025-05-25 00:56:00.785006 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-25 00:56:00.785012 | orchestrator | Sunday 25 May 2025 00:54:13 +0000 (0:00:03.024) 0:05:09.702 ************
2025-05-25 00:56:00.785018 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:56:00.785025 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:56:00.785031 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:56:00.785037 | orchestrator |
2025-05-25 00:56:00.785043 | orchestrator | TASK [include_role : octavia] **************************************************
2025-05-25 00:56:00.785049 | orchestrator | Sunday 25 May 2025 00:54:17 +0000 (0:00:03.938) 0:05:13.640 ************
2025-05-25 00:56:00.785055 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:56:00.785062 | orchestrator |
2025-05-25 00:56:00.785068 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-05-25 00:56:00.785074 | orchestrator | Sunday 25 May 2025 00:54:18 +0000 (0:00:01.742) 0:05:15.383 ************
2025-05-25 00:56:00.785090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 00:56:00.785098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 00:56:00.785104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.785124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 00:56:00.785139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 00:56:00.785149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.785169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 00:56:00.785175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 00:56:00.785191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.785214 | orchestrator |
2025-05-25 00:56:00.785220 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-05-25 00:56:00.785226 | orchestrator | Sunday 25 May 2025 00:54:23 +0000 (0:00:04.436) 0:05:19.819 ************
2025-05-25 00:56:00.785233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 00:56:00.785240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 00:56:00.785246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-25 00:56:00.785273 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:56:00.785280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-25 00:56:00.785338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-25 00:56:00.785354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-25 00:56:00.785365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.785374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.785381 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.785392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.785399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-25 00:56:00.785406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.785412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-25 00:56:00.785422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-25 00:56:00.785429 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.785435 | orchestrator | 2025-05-25 00:56:00.785441 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-25 00:56:00.785447 | orchestrator | Sunday 25 May 2025 00:54:24 +0000 (0:00:01.124) 0:05:20.944 ************ 2025-05-25 00:56:00.785454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-25 00:56:00.785463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-25 00:56:00.785470 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.785476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-25 00:56:00.785545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-25 00:56:00.785553 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.785560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-25 00:56:00.785566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-25 00:56:00.785572 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.785579 | orchestrator | 2025-05-25 00:56:00.785585 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-25 00:56:00.785591 | orchestrator | Sunday 25 May 2025 00:54:25 +0000 (0:00:01.085) 0:05:22.029 ************ 2025-05-25 00:56:00.785597 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.785603 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.785609 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.785615 | orchestrator | 2025-05-25 00:56:00.785621 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-25 00:56:00.785628 | orchestrator | Sunday 25 May 2025 00:54:26 +0000 (0:00:01.339) 0:05:23.369 ************ 2025-05-25 00:56:00.785634 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.785640 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.785646 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.785652 | orchestrator | 2025-05-25 00:56:00.785658 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-25 00:56:00.785664 | orchestrator | Sunday 25 May 2025 00:54:29 +0000 (0:00:02.442) 0:05:25.812 ************ 2025-05-25 00:56:00.785671 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.785677 | orchestrator | 2025-05-25 00:56:00.785687 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2025-05-25 00:56:00.785737 | orchestrator | Sunday 25 May 2025 00:54:30 +0000 (0:00:01.506) 0:05:27.318 ************ 2025-05-25 00:56:00.785746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 00:56:00.785754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 00:56:00.785785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-25 00:56:00.785794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 00:56:00.785802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 00:56:00.785814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-25 00:56:00.785821 | orchestrator | 2025-05-25 00:56:00.785828 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-25 00:56:00.785834 | orchestrator | Sunday 25 May 2025 00:54:37 +0000 (0:00:06.221) 0:05:33.540 ************ 2025-05-25 00:56:00.785860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 00:56:00.785868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 00:56:00.785880 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.785886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 00:56:00.785893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 00:56:00.785900 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.785910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-25 00:56:00.785934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-25 00:56:00.785946 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.785952 | orchestrator | 2025-05-25 00:56:00.785958 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-25 00:56:00.785965 | orchestrator | Sunday 25 May 2025 00:54:37 +0000 (0:00:00.896) 0:05:34.436 ************ 2025-05-25 00:56:00.785971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-25 00:56:00.785978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-25 00:56:00.785985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-25 00:56:00.785991 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.785997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2025-05-25 00:56:00.786003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-25 00:56:00.786008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-25 00:56:00.786014 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.786043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-25 00:56:00.786049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-25 00:56:00.786055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-25 00:56:00.786060 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.786066 | orchestrator | 2025-05-25 00:56:00.786071 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-25 00:56:00.786080 | orchestrator | Sunday 25 May 2025 00:54:39 +0000 (0:00:01.291) 0:05:35.728 ************ 2025-05-25 00:56:00.786085 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.786091 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.786096 | orchestrator | 
skipping: [testbed-node-2] 2025-05-25 00:56:00.786101 | orchestrator | 2025-05-25 00:56:00.786107 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-25 00:56:00.786112 | orchestrator | Sunday 25 May 2025 00:54:39 +0000 (0:00:00.705) 0:05:36.434 ************ 2025-05-25 00:56:00.786117 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.786123 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.786128 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.786133 | orchestrator | 2025-05-25 00:56:00.786156 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-25 00:56:00.786167 | orchestrator | Sunday 25 May 2025 00:54:41 +0000 (0:00:01.748) 0:05:38.183 ************ 2025-05-25 00:56:00.786173 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.786178 | orchestrator | 2025-05-25 00:56:00.786184 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-25 00:56:00.786189 | orchestrator | Sunday 25 May 2025 00:54:43 +0000 (0:00:01.839) 0:05:40.022 ************ 2025-05-25 00:56:00.786195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 00:56:00.786201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 00:56:00.786207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 00:56:00.786254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 
00:56:00.786261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 00:56:00.786267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 00:56:00.786272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 00:56:00.786336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 00:56:00.786345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 00:56:00.786370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 00:56:00.786383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 00:56:00.786432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 00:56:00.786444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786478 | orchestrator | 2025-05-25 00:56:00.786483 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-25 00:56:00.786489 | orchestrator | Sunday 25 May 2025 00:54:48 +0000 (0:00:04.813) 0:05:44.835 ************ 2025-05-25 00:56:00.786495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 00:56:00.786500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 00:56:00.786506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 00:56:00.786540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 00:56:00.786546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786574 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.786584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 00:56:00.786589 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 00:56:00.786595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 00:56:00.786629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 00:56:00.786635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 
'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786662 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.786667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 00:56:00.786676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 00:56:00.786684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 00:56:00.786727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 00:56:00.786736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 00:56:00.786756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 00:56:00.786762 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.786768 | 
orchestrator | 2025-05-25 00:56:00.786773 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-25 00:56:00.786779 | orchestrator | Sunday 25 May 2025 00:54:49 +0000 (0:00:01.451) 0:05:46.286 ************ 2025-05-25 00:56:00.786785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-25 00:56:00.786790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-25 00:56:00.786800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 00:56:00.786806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 00:56:00.786812 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.786817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-25 00:56:00.786823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True}})  2025-05-25 00:56:00.786828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 00:56:00.786834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 00:56:00.786840 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.786848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-25 00:56:00.786853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-25 00:56:00.786862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 00:56:00.786868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-25 00:56:00.786873 | orchestrator | skipping: [testbed-node-2] 
2025-05-25 00:56:00.786878 | orchestrator | 2025-05-25 00:56:00.786884 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-25 00:56:00.786889 | orchestrator | Sunday 25 May 2025 00:54:51 +0000 (0:00:01.300) 0:05:47.587 ************ 2025-05-25 00:56:00.786895 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.786900 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.786906 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.786911 | orchestrator | 2025-05-25 00:56:00.786916 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-25 00:56:00.786922 | orchestrator | Sunday 25 May 2025 00:54:52 +0000 (0:00:00.927) 0:05:48.515 ************ 2025-05-25 00:56:00.786927 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.786932 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.786942 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.786947 | orchestrator | 2025-05-25 00:56:00.786953 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-25 00:56:00.786958 | orchestrator | Sunday 25 May 2025 00:54:53 +0000 (0:00:01.621) 0:05:50.136 ************ 2025-05-25 00:56:00.786963 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.786969 | orchestrator | 2025-05-25 00:56:00.786974 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-25 00:56:00.786979 | orchestrator | Sunday 25 May 2025 00:54:55 +0000 (0:00:01.545) 0:05:51.682 ************ 2025-05-25 00:56:00.786985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 00:56:00.786991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 00:56:00.787003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-25 00:56:00.787009 | orchestrator | 2025-05-25 00:56:00.787015 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-25 00:56:00.787020 | orchestrator | Sunday 25 May 2025 00:54:58 +0000 (0:00:02.815) 0:05:54.497 ************ 2025-05-25 00:56:00.787026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-25 00:56:00.787035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-25 00:56:00.787041 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787047 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-25 00:56:00.787058 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787064 | orchestrator | 2025-05-25 00:56:00.787072 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-25 00:56:00.787077 | orchestrator | Sunday 25 May 2025 00:54:58 +0000 (0:00:00.666) 0:05:55.164 ************ 2025-05-25 00:56:00.787083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-25 00:56:00.787088 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-25 00:56:00.787102 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-25 00:56:00.787116 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787121 | orchestrator | 2025-05-25 00:56:00.787127 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-25 00:56:00.787132 | orchestrator | Sunday 25 May 2025 00:54:59 +0000 (0:00:01.127) 0:05:56.291 ************ 2025-05-25 00:56:00.787138 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787143 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787148 | orchestrator | skipping: [testbed-node-2] 2025-05-25 
00:56:00.787154 | orchestrator | 2025-05-25 00:56:00.787159 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-25 00:56:00.787164 | orchestrator | Sunday 25 May 2025 00:55:00 +0000 (0:00:00.714) 0:05:57.006 ************ 2025-05-25 00:56:00.787170 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787175 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787180 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787186 | orchestrator | 2025-05-25 00:56:00.787191 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-25 00:56:00.787196 | orchestrator | Sunday 25 May 2025 00:55:02 +0000 (0:00:01.760) 0:05:58.766 ************ 2025-05-25 00:56:00.787202 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:56:00.787207 | orchestrator | 2025-05-25 00:56:00.787212 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-25 00:56:00.787218 | orchestrator | Sunday 25 May 2025 00:55:04 +0000 (0:00:01.927) 0:06:00.694 ************ 2025-05-25 00:56:00.787224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.787230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.787241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.787251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.787257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.787263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-25 00:56:00.787268 | orchestrator | 2025-05-25 00:56:00.787274 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-25 00:56:00.787279 | orchestrator | Sunday 25 May 2025 00:55:11 +0000 (0:00:07.418) 0:06:08.112 ************ 2025-05-25 00:56:00.787288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.787302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.787308 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.787319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.787325 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.787345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-25 00:56:00.787350 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787356 | orchestrator | 2025-05-25 00:56:00.787361 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-25 00:56:00.787367 | orchestrator | Sunday 25 May 2025 00:55:12 +0000 (0:00:00.913) 0:06:09.026 ************ 2025-05-25 00:56:00.787372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787378 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787395 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787422 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-25 00:56:00.787454 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787459 | orchestrator | 2025-05-25 00:56:00.787465 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-25 00:56:00.787473 | orchestrator | Sunday 25 May 2025 00:55:14 +0000 (0:00:01.636) 0:06:10.662 ************ 2025-05-25 00:56:00.787479 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.787484 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.787490 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.787495 | orchestrator | 2025-05-25 00:56:00.787500 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-25 00:56:00.787506 | orchestrator | Sunday 25 May 2025 00:55:15 +0000 (0:00:01.473) 0:06:12.136 ************ 2025-05-25 00:56:00.787511 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.787516 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.787522 | orchestrator | changed: 
[testbed-node-2] 2025-05-25 00:56:00.787527 | orchestrator | 2025-05-25 00:56:00.787535 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-25 00:56:00.787541 | orchestrator | Sunday 25 May 2025 00:55:18 +0000 (0:00:02.467) 0:06:14.603 ************ 2025-05-25 00:56:00.787546 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787552 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787557 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787562 | orchestrator | 2025-05-25 00:56:00.787568 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-25 00:56:00.787573 | orchestrator | Sunday 25 May 2025 00:55:18 +0000 (0:00:00.304) 0:06:14.908 ************ 2025-05-25 00:56:00.787579 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787584 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787589 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787595 | orchestrator | 2025-05-25 00:56:00.787600 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-25 00:56:00.787605 | orchestrator | Sunday 25 May 2025 00:55:18 +0000 (0:00:00.546) 0:06:15.455 ************ 2025-05-25 00:56:00.787611 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787616 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787621 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787627 | orchestrator | 2025-05-25 00:56:00.787632 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-25 00:56:00.787637 | orchestrator | Sunday 25 May 2025 00:55:19 +0000 (0:00:00.576) 0:06:16.032 ************ 2025-05-25 00:56:00.787643 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787648 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787653 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 00:56:00.787659 | orchestrator | 2025-05-25 00:56:00.787664 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-25 00:56:00.787670 | orchestrator | Sunday 25 May 2025 00:55:19 +0000 (0:00:00.290) 0:06:16.322 ************ 2025-05-25 00:56:00.787675 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787680 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787686 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787707 | orchestrator | 2025-05-25 00:56:00.787713 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-25 00:56:00.787722 | orchestrator | Sunday 25 May 2025 00:55:20 +0000 (0:00:00.568) 0:06:16.891 ************ 2025-05-25 00:56:00.787728 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.787733 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.787738 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.787744 | orchestrator | 2025-05-25 00:56:00.787749 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-25 00:56:00.787754 | orchestrator | Sunday 25 May 2025 00:55:21 +0000 (0:00:01.016) 0:06:17.907 ************ 2025-05-25 00:56:00.787760 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.787765 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.787770 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.787776 | orchestrator | 2025-05-25 00:56:00.787781 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-25 00:56:00.787787 | orchestrator | Sunday 25 May 2025 00:55:22 +0000 (0:00:00.676) 0:06:18.584 ************ 2025-05-25 00:56:00.787792 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.787797 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.787803 | orchestrator | ok: [testbed-node-2] 2025-05-25 
00:56:00.787808 | orchestrator | 2025-05-25 00:56:00.787813 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-25 00:56:00.787819 | orchestrator | Sunday 25 May 2025 00:55:22 +0000 (0:00:00.578) 0:06:19.162 ************ 2025-05-25 00:56:00.787824 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.787829 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.787835 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.787840 | orchestrator | 2025-05-25 00:56:00.787846 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-25 00:56:00.787851 | orchestrator | Sunday 25 May 2025 00:55:23 +0000 (0:00:01.216) 0:06:20.378 ************ 2025-05-25 00:56:00.787856 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.787862 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.787867 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.787872 | orchestrator | 2025-05-25 00:56:00.787877 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-25 00:56:00.787883 | orchestrator | Sunday 25 May 2025 00:55:25 +0000 (0:00:01.235) 0:06:21.614 ************ 2025-05-25 00:56:00.787888 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.787893 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.787899 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.787904 | orchestrator | 2025-05-25 00:56:00.787909 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-25 00:56:00.787915 | orchestrator | Sunday 25 May 2025 00:55:26 +0000 (0:00:00.953) 0:06:22.567 ************ 2025-05-25 00:56:00.787920 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.787926 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.787931 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.787936 | orchestrator | 2025-05-25 00:56:00.787942 
| orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-25 00:56:00.787947 | orchestrator | Sunday 25 May 2025 00:55:31 +0000 (0:00:04.973) 0:06:27.540 ************ 2025-05-25 00:56:00.787952 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.787958 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.787963 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.787968 | orchestrator | 2025-05-25 00:56:00.787974 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-25 00:56:00.787982 | orchestrator | Sunday 25 May 2025 00:55:35 +0000 (0:00:04.008) 0:06:31.549 ************ 2025-05-25 00:56:00.787987 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.787993 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.787998 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.788003 | orchestrator | 2025-05-25 00:56:00.788009 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-25 00:56:00.788014 | orchestrator | Sunday 25 May 2025 00:55:46 +0000 (0:00:11.358) 0:06:42.908 ************ 2025-05-25 00:56:00.788023 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.788028 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.788034 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.788039 | orchestrator | 2025-05-25 00:56:00.788044 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-25 00:56:00.788053 | orchestrator | Sunday 25 May 2025 00:55:47 +0000 (0:00:00.719) 0:06:43.627 ************ 2025-05-25 00:56:00.788059 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:56:00.788064 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:56:00.788069 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:56:00.788075 | orchestrator | 2025-05-25 00:56:00.788080 | orchestrator | RUNNING HANDLER 
[loadbalancer : Stop master haproxy container] ***************** 2025-05-25 00:56:00.788086 | orchestrator | Sunday 25 May 2025 00:55:51 +0000 (0:00:04.318) 0:06:47.945 ************ 2025-05-25 00:56:00.788091 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.788096 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.788102 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.788107 | orchestrator | 2025-05-25 00:56:00.788112 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-25 00:56:00.788118 | orchestrator | Sunday 25 May 2025 00:55:52 +0000 (0:00:00.586) 0:06:48.532 ************ 2025-05-25 00:56:00.788123 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.788129 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.788134 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.788139 | orchestrator | 2025-05-25 00:56:00.788145 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-25 00:56:00.788150 | orchestrator | Sunday 25 May 2025 00:55:52 +0000 (0:00:00.318) 0:06:48.851 ************ 2025-05-25 00:56:00.788155 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.788161 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.788166 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.788171 | orchestrator | 2025-05-25 00:56:00.788177 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-25 00:56:00.788182 | orchestrator | Sunday 25 May 2025 00:55:52 +0000 (0:00:00.607) 0:06:49.458 ************ 2025-05-25 00:56:00.788188 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.788193 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.788198 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.788204 | orchestrator | 2025-05-25 00:56:00.788209 | orchestrator | RUNNING HANDLER 
[loadbalancer : Start master proxysql container] *************** 2025-05-25 00:56:00.788214 | orchestrator | Sunday 25 May 2025 00:55:53 +0000 (0:00:00.591) 0:06:50.050 ************ 2025-05-25 00:56:00.788220 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.788225 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.788230 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.788236 | orchestrator | 2025-05-25 00:56:00.788241 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-25 00:56:00.788247 | orchestrator | Sunday 25 May 2025 00:55:54 +0000 (0:00:00.583) 0:06:50.633 ************ 2025-05-25 00:56:00.788252 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:56:00.788257 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:56:00.788262 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:56:00.788268 | orchestrator | 2025-05-25 00:56:00.788273 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-25 00:56:00.788279 | orchestrator | Sunday 25 May 2025 00:55:54 +0000 (0:00:00.326) 0:06:50.960 ************ 2025-05-25 00:56:00.788284 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.788290 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.788295 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.788300 | orchestrator | 2025-05-25 00:56:00.788306 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-25 00:56:00.788311 | orchestrator | Sunday 25 May 2025 00:55:57 +0000 (0:00:03.322) 0:06:54.282 ************ 2025-05-25 00:56:00.788316 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:00.788325 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:00.788330 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:00.788336 | orchestrator | 2025-05-25 00:56:00.788341 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-25 00:56:00.788347 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-25 00:56:00.788352 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-25 00:56:00.788358 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-25 00:56:00.788363 | orchestrator | 2025-05-25 00:56:00.788369 | orchestrator | 2025-05-25 00:56:00.788374 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 00:56:00.788379 | orchestrator | Sunday 25 May 2025 00:55:58 +0000 (0:00:01.132) 0:06:55.414 ************ 2025-05-25 00:56:00.788385 | orchestrator | =============================================================================== 2025-05-25 00:56:00.788390 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.36s 2025-05-25 00:56:00.788396 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.42s 2025-05-25 00:56:00.788401 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 6.94s 2025-05-25 00:56:00.788406 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.22s 2025-05-25 00:56:00.788412 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.25s 2025-05-25 00:56:00.788420 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.08s 2025-05-25 00:56:00.788425 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.97s 2025-05-25 00:56:00.788431 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.97s 2025-05-25 00:56:00.788436 | orchestrator | haproxy-config : Copying over 
prometheus haproxy config ----------------- 4.81s 2025-05-25 00:56:00.788442 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.77s 2025-05-25 00:56:00.788447 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.72s 2025-05-25 00:56:00.788455 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.71s 2025-05-25 00:56:00.788460 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.64s 2025-05-25 00:56:00.788466 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.44s 2025-05-25 00:56:00.788471 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.41s 2025-05-25 00:56:00.788476 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.35s 2025-05-25 00:56:00.788482 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.34s 2025-05-25 00:56:00.788487 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.32s 2025-05-25 00:56:00.788492 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.22s 2025-05-25 00:56:00.788498 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.03s 2025-05-25 00:56:00.788503 | orchestrator | 2025-05-25 00:56:00 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:56:00.788509 | orchestrator | 2025-05-25 00:56:00 | INFO  | Task bf657650-7137-4820-9383-f2146f07434e is in state STARTED 2025-05-25 00:56:00.788514 | orchestrator | 2025-05-25 00:56:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:56:00.788520 | orchestrator | 2025-05-25 00:56:00 | INFO  | Task 7c35bbda-7106-4936-8a63-3e06a9b50752 is in state STARTED 2025-05-25 00:56:00.788525 | 
orchestrator | 2025-05-25 00:56:00 | INFO  | Task 175ec966-750e-4ed8-834b-a00a754d0340 is in state STARTED 2025-05-25 00:56:00.788536 | orchestrator | 2025-05-25 00:56:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:56:03.828114 | orchestrator | 2025-05-25 00:56:03 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:56:03.828222 | orchestrator | 2025-05-25 00:56:03 | INFO  | Task bf657650-7137-4820-9383-f2146f07434e is in state STARTED 2025-05-25 00:56:03.828610 | orchestrator | 2025-05-25 00:56:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:56:03.828895 | orchestrator | 2025-05-25 00:56:03 | INFO  | Task 7c35bbda-7106-4936-8a63-3e06a9b50752 is in state STARTED 2025-05-25 00:56:03.829366 | orchestrator | 2025-05-25 00:56:03 | INFO  | Task 175ec966-750e-4ed8-834b-a00a754d0340 is in state STARTED 2025-05-25 00:56:03.829386 | orchestrator | 2025-05-25 00:56:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:56:06.862536 | orchestrator | 2025-05-25 00:56:06.862646 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2025-05-25 00:56:06.862662 | orchestrator | -vvvv to see details 2025-05-25 00:56:06.862675 | orchestrator | 2025-05-25 00:56:06.862736 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-25 00:56:06.862750 | orchestrator | 2025-05-25 00:56:06.862761 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-25 00:56:06.862772 | orchestrator | ok: [localhost] => { 2025-05-25 00:56:06.862783 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-05-25 00:56:06.862795 | orchestrator | } 2025-05-25 00:56:06.862806 | orchestrator | 2025-05-25 00:56:06.862817 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-25 00:56:06.862828 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-25 00:56:06.862841 | orchestrator | ...ignoring 2025-05-25 00:56:06.862852 | orchestrator | 2025-05-25 00:56:06.862863 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-25 00:56:06.862874 | orchestrator | skipping: [localhost] 2025-05-25 00:56:06.862886 | orchestrator | 2025-05-25 00:56:06.862896 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-25 00:56:06.862907 | orchestrator | ok: [localhost] 2025-05-25 00:56:06.862918 | orchestrator | 2025-05-25 00:56:06.862929 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 00:56:06.862940 | orchestrator | 2025-05-25 00:56:06.862951 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 00:56:06.862962 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:56:06.862973 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:56:06.862983 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:56:06.862994 | orchestrator | 2025-05-25 00:56:06.863005 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 00:56:06.863016 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-25 00:56:06.863027 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-25 00:56:06.863038 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-25 00:56:06.863049 | orchestrator | 2025-05-25 00:56:06.863060 | orchestrator | 
PLAY [Apply role mariadb] ****************************************************** 2025-05-25 00:56:06.863073 | orchestrator | 2025-05-25 00:56:06.863103 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-25 00:56:06.863117 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 00:56:06.863129 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-25 00:56:06.863142 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-25 00:56:06.863155 | orchestrator | 2025-05-25 00:56:06.863168 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-25 00:56:06.863203 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-05-25 00:56:06.863217 | orchestrator | 2025-05-25 00:56:06.863230 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-25 00:56:06.863269 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"msg": "{'mariadb': {'container_name': 'mariadb', 'group': '{{ mariadb_shard_group }}', 'enabled': True, 'image': '{{ mariadb_image_full }}', 'volumes': '{{ mariadb_default_volumes + mariadb_extra_volumes }}', 'dimensions': '{{ mariadb_dimensions }}', 'healthcheck': '{{ mariadb_healthcheck }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': '{{ enable_mariadb | bool and not enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', '{% if enable_mariadb_clustercheck | bool %}option httpchk{% endif %}'], 'custom_member_list': \"{{ internal_haproxy_members.split(';') }}\"}, 'mariadb_external_lb': {'enabled': '{{ enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': \"{{ external_haproxy_members.split(';') }}\"}}}, 'mariadb-clustercheck': {'container_name': 'mariadb_clustercheck', 'group': '{{ mariadb_shard_group }}', 'enabled': '{{ enable_mariadb_clustercheck | bool }}', 'image': '{{ mariadb_clustercheck_image_full }}', 'volumes': '{{ mariadb_clustercheck_default_volumes + mariadb_clustercheck_extra_volumes }}', 'dimensions': '{{ mariadb_clustercheck_dimensions }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}}}: ['{{ 
node_config_directory }}/mariadb/:{{ container_config_directory }}/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', \"{{ '/etc/timezone:/etc/timezone:ro' if ansible_facts.os_family == 'Debian' else '' }}\", '{{ mariadb_datadir_volume }}:/var/lib/mysql', 'kolla_logs:/var/log/kolla/']: 'dict object' has no attribute 'os_family'"} 2025-05-25 00:56:06.863293 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"msg": "{'mariadb': {'container_name': 'mariadb', 'group': '{{ mariadb_shard_group }}', 'enabled': True, 'image': '{{ mariadb_image_full }}', 'volumes': '{{ mariadb_default_volumes + mariadb_extra_volumes }}', 'dimensions': '{{ mariadb_dimensions }}', 'healthcheck': '{{ mariadb_healthcheck }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': '{{ enable_mariadb | bool and not enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', '{% if enable_mariadb_clustercheck | bool %}option httpchk{% endif %}'], 'custom_member_list': \"{{ internal_haproxy_members.split(';') }}\"}, 'mariadb_external_lb': {'enabled': '{{ enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': \"{{ external_haproxy_members.split(';') }}\"}}}, 'mariadb-clustercheck': {'container_name': 'mariadb_clustercheck', 'group': '{{ mariadb_shard_group }}', 'enabled': '{{ enable_mariadb_clustercheck | bool }}', 'image': '{{ 
mariadb_clustercheck_image_full }}', 'volumes': '{{ mariadb_clustercheck_default_volumes + mariadb_clustercheck_extra_volumes }}', 'dimensions': '{{ mariadb_clustercheck_dimensions }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}}}: ['{{ node_config_directory }}/mariadb/:{{ container_config_directory }}/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', \"{{ '/etc/timezone:/etc/timezone:ro' if ansible_facts.os_family == 'Debian' else '' }}\", '{{ mariadb_datadir_volume }}:/var/lib/mysql', 'kolla_logs:/var/log/kolla/']: 'dict object' has no attribute 'os_family'"} 2025-05-25 00:56:06.863324 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"msg": "{'mariadb': {'container_name': 'mariadb', 'group': '{{ mariadb_shard_group }}', 'enabled': True, 'image': '{{ mariadb_image_full }}', 'volumes': '{{ mariadb_default_volumes + mariadb_extra_volumes }}', 'dimensions': '{{ mariadb_dimensions }}', 'healthcheck': '{{ mariadb_healthcheck }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': '{{ enable_mariadb | bool and not enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', '{% if enable_mariadb_clustercheck | bool %}option httpchk{% endif %}'], 'custom_member_list': \"{{ internal_haproxy_members.split(';') }}\"}, 'mariadb_external_lb': {'enabled': '{{ enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 
'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': \"{{ external_haproxy_members.split(';') }}\"}}}, 'mariadb-clustercheck': {'container_name': 'mariadb_clustercheck', 'group': '{{ mariadb_shard_group }}', 'enabled': '{{ enable_mariadb_clustercheck | bool }}', 'image': '{{ mariadb_clustercheck_image_full }}', 'volumes': '{{ mariadb_clustercheck_default_volumes + mariadb_clustercheck_extra_volumes }}', 'dimensions': '{{ mariadb_clustercheck_dimensions }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}}}: ['{{ node_config_directory }}/mariadb/:{{ container_config_directory }}/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', \"{{ '/etc/timezone:/etc/timezone:ro' if ansible_facts.os_family == 'Debian' else '' }}\", '{{ mariadb_datadir_volume }}:/var/lib/mysql', 'kolla_logs:/var/log/kolla/']: 'dict object' has no attribute 'os_family'"} 2025-05-25 00:56:06.863337 | orchestrator | 2025-05-25 00:56:06.863349 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:56:06.863361 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-25 00:56:06.863372 | orchestrator | testbed-node-0 : ok=4  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-25 00:56:06.863395 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-25 00:56:06.863413 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-25 00:56:06.863424 | orchestrator | 2025-05-25 00:56:06.863435 | orchestrator | 2025-05-25 00:56:06 | INFO  | 
Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:06.863446 | orchestrator | 2025-05-25 00:56:06 | INFO  | Task bf657650-7137-4820-9383-f2146f07434e is in state SUCCESS
2025-05-25 00:56:06.863457 | orchestrator | 2025-05-25 00:56:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:06.863468 | orchestrator | 2025-05-25 00:56:06 | INFO  | Task 7c35bbda-7106-4936-8a63-3e06a9b50752 is in state SUCCESS
2025-05-25 00:56:06.863483 | orchestrator | 2025-05-25 00:56:06 | INFO  | Task 175ec966-750e-4ed8-834b-a00a754d0340 is in state SUCCESS
2025-05-25 00:56:06.863503 | orchestrator | 2025-05-25 00:56:06 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:09.911423 | orchestrator | 2025-05-25 00:56:09 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:09.913130 | orchestrator | 2025-05-25 00:56:09 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:09.913943 | orchestrator | 2025-05-25 00:56:09 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:09.914619 | orchestrator | 2025-05-25 00:56:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:09.914646 | orchestrator | 2025-05-25 00:56:09 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:12.965259 | orchestrator | 2025-05-25 00:56:12 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:12.965371 | orchestrator | 2025-05-25 00:56:12 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:12.965735 | orchestrator | 2025-05-25 00:56:12 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:12.966215 | orchestrator | 2025-05-25 00:56:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:12.966242 | orchestrator | 2025-05-25 00:56:12 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:15.992217 | orchestrator | 2025-05-25 00:56:15 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:15.993089 | orchestrator | 2025-05-25 00:56:15 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:15.994487 | orchestrator | 2025-05-25 00:56:15 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:16.007250 | orchestrator | 2025-05-25 00:56:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:16.008383 | orchestrator | 2025-05-25 00:56:16 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:19.043202 | orchestrator | 2025-05-25 00:56:19 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:19.043736 | orchestrator | 2025-05-25 00:56:19 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:19.044659 | orchestrator | 2025-05-25 00:56:19 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:19.045511 | orchestrator | 2025-05-25 00:56:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:19.045722 | orchestrator | 2025-05-25 00:56:19 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:22.083874 | orchestrator | 2025-05-25 00:56:22 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:22.087501 | orchestrator | 2025-05-25 00:56:22 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:22.088792 | orchestrator | 2025-05-25 00:56:22 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:22.089561 | orchestrator | 2025-05-25 00:56:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:22.089793 | orchestrator | 2025-05-25 00:56:22 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:25.142095 | orchestrator | 2025-05-25 00:56:25 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:25.143412 | orchestrator | 2025-05-25 00:56:25 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:25.144533 | orchestrator | 2025-05-25 00:56:25 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:25.146128 | orchestrator | 2025-05-25 00:56:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:25.146175 | orchestrator | 2025-05-25 00:56:25 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:28.185200 | orchestrator | 2025-05-25 00:56:28 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:28.186912 | orchestrator | 2025-05-25 00:56:28 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:28.187896 | orchestrator | 2025-05-25 00:56:28 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:28.188813 | orchestrator | 2025-05-25 00:56:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:28.188940 | orchestrator | 2025-05-25 00:56:28 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:31.226779 | orchestrator | 2025-05-25 00:56:31 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:31.227231 | orchestrator | 2025-05-25 00:56:31 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:31.228265 | orchestrator | 2025-05-25 00:56:31 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:31.231784 | orchestrator | 2025-05-25 00:56:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:31.231899 | orchestrator | 2025-05-25 00:56:31 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:34.275398 | orchestrator | 2025-05-25 00:56:34 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:34.275876 | orchestrator | 2025-05-25 00:56:34 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:34.276899 | orchestrator | 2025-05-25 00:56:34 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:34.277532 | orchestrator | 2025-05-25 00:56:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:34.277572 | orchestrator | 2025-05-25 00:56:34 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:37.322958 | orchestrator | 2025-05-25 00:56:37 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:37.323074 | orchestrator | 2025-05-25 00:56:37 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:37.323102 | orchestrator | 2025-05-25 00:56:37 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:37.323718 | orchestrator | 2025-05-25 00:56:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:37.324062 | orchestrator | 2025-05-25 00:56:37 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:40.359175 | orchestrator | 2025-05-25 00:56:40 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:40.359952 | orchestrator | 2025-05-25 00:56:40 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:40.360412 | orchestrator | 2025-05-25 00:56:40 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:40.361442 | orchestrator | 2025-05-25 00:56:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:40.361469 | orchestrator | 2025-05-25 00:56:40 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:43.394103 | orchestrator | 2025-05-25 00:56:43 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:43.394212 | orchestrator | 2025-05-25 00:56:43 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:43.394609 | orchestrator | 2025-05-25 00:56:43 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:43.395837 | orchestrator | 2025-05-25 00:56:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:43.395890 | orchestrator | 2025-05-25 00:56:43 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:46.432731 | orchestrator | 2025-05-25 00:56:46 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:46.434346 | orchestrator | 2025-05-25 00:56:46 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:46.436090 | orchestrator | 2025-05-25 00:56:46 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:46.436802 | orchestrator | 2025-05-25 00:56:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:46.436828 | orchestrator | 2025-05-25 00:56:46 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:49.475222 | orchestrator | 2025-05-25 00:56:49 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:49.477562 | orchestrator | 2025-05-25 00:56:49 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:49.479997 | orchestrator | 2025-05-25 00:56:49 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:49.482327 | orchestrator | 2025-05-25 00:56:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:49.482847 | orchestrator | 2025-05-25 00:56:49 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:52.526320 | orchestrator | 2025-05-25 00:56:52 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:52.528010 | orchestrator | 2025-05-25 00:56:52 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:52.530146 | orchestrator | 2025-05-25 00:56:52 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:52.532083 | orchestrator | 2025-05-25 00:56:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:52.532113 | orchestrator | 2025-05-25 00:56:52 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:55.578412 | orchestrator | 2025-05-25 00:56:55 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:55.579562 | orchestrator | 2025-05-25 00:56:55 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:55.582213 | orchestrator | 2025-05-25 00:56:55 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:55.583958 | orchestrator | 2025-05-25 00:56:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:55.584048 | orchestrator | 2025-05-25 00:56:55 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:56:58.639189 | orchestrator | 2025-05-25 00:56:58 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:56:58.639311 | orchestrator | 2025-05-25 00:56:58 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state STARTED
2025-05-25 00:56:58.639341 | orchestrator | 2025-05-25 00:56:58 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED
2025-05-25 00:56:58.640458 | orchestrator | 2025-05-25 00:56:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:56:58.640489 | orchestrator | 2025-05-25 00:56:58 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:57:01.692765 | orchestrator | 2025-05-25 00:57:01 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:57:01.694806 | orchestrator | 2025-05-25 00:57:01 | INFO  | Task c5f621a3-280e-4d7d-a692-3d9e235f22b2 is in state SUCCESS
2025-05-25 00:57:01.697250 | orchestrator |
2025-05-25 00:57:01.697304 | orchestrator | None
2025-05-25 00:57:01.697413 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2025-05-25 00:57:01.697434 | orchestrator | -vvvv to see details
2025-05-25 00:57:01.697448 | orchestrator |
2025-05-25 00:57:01.697467 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:57:01.697487 | orchestrator |
2025-05-25 00:57:01.697515 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:57:01.697537 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:01.697555 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:01.697573 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:57:01.697591 | orchestrator |
2025-05-25 00:57:01.697955 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:57:01.697970 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-05-25 00:57:01.697981 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-05-25 00:57:01.697992 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-05-25 00:57:01.698004 | orchestrator |
2025-05-25 00:57:01.698073 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-05-25 00:57:01.698088 | orchestrator |
2025-05-25 00:57:01.698099 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-25 00:57:01.698111 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:57:01.698122 | orchestrator |
2025-05-25 00:57:01.698133 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-05-25 00:57:01.698155 | orchestrator | failed: [testbed-node-2] (item={'name': 'vm.max_map_count', 'value': 262144}) => {"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:57:01.698172 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}], "unreachable": true}
2025-05-25 00:57:01.698186 | orchestrator | failed: [testbed-node-1] (item={'name': 'vm.max_map_count', 'value': 262144}) => {"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:57:01.698217 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}], "unreachable": true}
2025-05-25 00:57:01.698230 | orchestrator | failed: [testbed-node-0] (item={'name': 'vm.max_map_count', 'value': 262144}) => {"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}
2025-05-25 00:57:01.698241 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_loop_var": "item", "item": {"name": "vm.max_map_count", "value": 262144}, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}], "unreachable": true}
2025-05-25 00:57:01.698252 | orchestrator |
2025-05-25 00:57:01.698263 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:57:01.698275 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:57:01.698394 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:57:01.698407 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=1  failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:57:01.698418 | orchestrator |
2025-05-25 00:57:01.698429 | orchestrator |
2025-05-25 00:57:01.698440 | orchestrator |
2025-05-25 00:57:01.698451 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:57:01.698462 | orchestrator |
2025-05-25 00:57:01.698473 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:57:01.698484 | orchestrator | Sunday 25 May 2025 00:56:09 +0000 (0:00:00.319) 0:00:00.319 ************
2025-05-25 00:57:01.698495 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:01.698505 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:01.698516 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:57:01.698527 | orchestrator |
2025-05-25 00:57:01.698537 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:57:01.698548 | orchestrator | Sunday 25 May 2025 00:56:09 +0000 (0:00:00.453) 0:00:00.772 ************
2025-05-25 00:57:01.698632 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-05-25 00:57:01.698645 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-05-25 00:57:01.698656 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-05-25 00:57:01.698666 | orchestrator |
2025-05-25 00:57:01.698701 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-05-25 00:57:01.698712 | orchestrator |
2025-05-25 00:57:01.698723 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-25 00:57:01.698734 | orchestrator | Sunday 25 May 2025 00:56:10 +0000 (0:00:00.346) 0:00:01.119 ************
2025-05-25 00:57:01.698756 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:57:01.698768 | orchestrator |
2025-05-25 00:57:01.698779 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-05-25 00:57:01.698796 | orchestrator | Sunday 25 May 2025 00:56:11 +0000 (0:00:00.828) 0:00:01.948 ************
2025-05-25
00:57:01.698813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:57:01.698849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:57:01.698872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:57:01.698884 | orchestrator | 2025-05-25 00:57:01.698896 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-25 00:57:01.698907 | orchestrator | Sunday 25 May 2025 00:56:12 +0000 (0:00:01.715) 0:00:03.663 ************ 2025-05-25 00:57:01.698918 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:01.698935 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:01.698946 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:01.698957 | orchestrator | 2025-05-25 00:57:01.698968 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-25 00:57:01.698979 | orchestrator | Sunday 25 May 2025 00:56:13 +0000 (0:00:00.230) 0:00:03.894 ************ 2025-05-25 00:57:01.698990 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-25 00:57:01.699001 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-25 00:57:01.699011 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-25 00:57:01.699022 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-25 00:57:01.699039 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 
'enabled': False})
2025-05-25 00:57:01.699050 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-05-25 00:57:01.699061 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-05-25 00:57:01.699071 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-25 00:57:01.699082 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-05-25 00:57:01.699093 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-05-25 00:57:01.699103 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-05-25 00:57:01.699114 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-05-25 00:57:01.699124 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-05-25 00:57:01.699140 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-05-25 00:57:01.699151 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-25 00:57:01.699162 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-05-25 00:57:01.699173 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-05-25 00:57:01.699183 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-05-25 00:57:01.699194 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-05-25 00:57:01.699205 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-05-25 00:57:01.699215 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-05-25 00:57:01.699227 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-05-25 00:57:01.699239 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-05-25 00:57:01.699251 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-05-25 00:57:01.699263 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-05-25 00:57:01.699276 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True})
2025-05-25 00:57:01.699290 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-05-25 00:57:01.699303 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-05-25 00:57:01.699316 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-05-25 00:57:01.699328 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-05-25 00:57:01.699341 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-05-25 00:57:01.699354 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-05-25 00:57:01.699372 | orchestrator |
2025-05-25 00:57:01.699385 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 00:57:01.699397 | orchestrator | Sunday 25 May 2025 00:56:13 +0000 (0:00:00.744) 0:00:04.639 ************
2025-05-25 00:57:01.699410 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:01.699422 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:01.699434 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:57:01.699447 | orchestrator |
2025-05-25 00:57:01.699466 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 00:57:01.699479 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.328) 0:00:04.967 ************
2025-05-25 00:57:01.699492 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.699504 | orchestrator |
2025-05-25 00:57:01.699517 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 00:57:01.699529 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.094) 0:00:05.062 ************
2025-05-25 00:57:01.699542 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.699554 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:01.699567 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:01.699579 | orchestrator |
2025-05-25 00:57:01.699591 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 00:57:01.699604 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.326) 0:00:05.388 ************
2025-05-25 00:57:01.699616 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:01.699627 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:01.699638 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:57:01.699649 | orchestrator |
2025-05-25 00:57:01.699660 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 00:57:01.699687 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.248) 0:00:05.637 ************
2025-05-25 00:57:01.699698 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.699709 | orchestrator |
2025-05-25 00:57:01.699720 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 00:57:01.699731 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.107) 0:00:05.744 ************
2025-05-25 00:57:01.699741 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.699752 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:01.699763 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:01.699774 | orchestrator |
2025-05-25 00:57:01.699784 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 00:57:01.699795 | orchestrator | Sunday 25 May 2025 00:56:15 +0000 (0:00:00.431) 0:00:06.175 ************
2025-05-25 00:57:01.699811 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:01.699822 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:01.699833 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:57:01.699844 | orchestrator |
2025-05-25 00:57:01.699854 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 00:57:01.699865 | orchestrator | Sunday 25 May 2025 00:56:15 +0000 (0:00:00.438) 0:00:06.614 ************
2025-05-25 00:57:01.699876 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.699887 | orchestrator |
2025-05-25 00:57:01.699897 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 00:57:01.699908 | orchestrator | Sunday 25 May 2025 00:56:15 +0000 (0:00:00.106) 0:00:06.720 ************
2025-05-25 00:57:01.699919 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.699930 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:01.699940 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:01.699951 | orchestrator |
2025-05-25 00:57:01.699962 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 00:57:01.699972 | orchestrator | Sunday 25 May 2025 00:56:16 +0000 (0:00:00.309) 0:00:07.030 ************
2025-05-25 00:57:01.699983 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:01.699994 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:01.700005 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:57:01.700015 | orchestrator |
2025-05-25 00:57:01.700026 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-25 00:57:01.700043 | orchestrator | Sunday 25 May 2025 00:56:16 +0000 (0:00:00.342) 0:00:07.373 ************
2025-05-25 00:57:01.700054 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.700065 | orchestrator |
2025-05-25 00:57:01.700076 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-25 00:57:01.700086 | orchestrator | Sunday 25 May 2025 00:56:16 +0000 (0:00:00.105) 0:00:07.479 ************
2025-05-25 00:57:01.700097 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.700108 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:01.700119 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:01.700129 | orchestrator |
2025-05-25 00:57:01.700140 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-25 00:57:01.700151 | orchestrator | Sunday 25 May 2025 00:56:16 +0000 (0:00:00.297) 0:00:07.776 ************
2025-05-25 00:57:01.700162 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:01.700172 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:01.700183 | orchestrator | ok: [testbed-node-2]
2025-05-25
00:57:01.700194 | orchestrator | 2025-05-25 00:57:01.700205 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-25 00:57:01.700216 | orchestrator | Sunday 25 May 2025 00:56:17 +0000 (0:00:00.277) 0:00:08.053 ************ 2025-05-25 00:57:01.700226 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700237 | orchestrator | 2025-05-25 00:57:01.700248 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-25 00:57:01.700259 | orchestrator | Sunday 25 May 2025 00:56:17 +0000 (0:00:00.158) 0:00:08.212 ************ 2025-05-25 00:57:01.700270 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700280 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.700291 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.700302 | orchestrator | 2025-05-25 00:57:01.700312 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-25 00:57:01.700323 | orchestrator | Sunday 25 May 2025 00:56:17 +0000 (0:00:00.255) 0:00:08.467 ************ 2025-05-25 00:57:01.700334 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:01.700345 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:01.700356 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:01.700366 | orchestrator | 2025-05-25 00:57:01.700377 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-25 00:57:01.700388 | orchestrator | Sunday 25 May 2025 00:56:18 +0000 (0:00:00.412) 0:00:08.879 ************ 2025-05-25 00:57:01.700399 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700409 | orchestrator | 2025-05-25 00:57:01.700420 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-25 00:57:01.700431 | orchestrator | Sunday 25 May 2025 00:56:18 +0000 (0:00:00.150) 0:00:09.029 ************ 2025-05-25 00:57:01.700442 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700459 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.700470 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.700480 | orchestrator | 2025-05-25 00:57:01.700491 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-25 00:57:01.700502 | orchestrator | Sunday 25 May 2025 00:56:18 +0000 (0:00:00.494) 0:00:09.524 ************ 2025-05-25 00:57:01.700513 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:01.700523 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:01.700534 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:01.700545 | orchestrator | 2025-05-25 00:57:01.700556 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-25 00:57:01.700567 | orchestrator | Sunday 25 May 2025 00:56:19 +0000 (0:00:00.474) 0:00:09.998 ************ 2025-05-25 00:57:01.700577 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700588 | orchestrator | 2025-05-25 00:57:01.700599 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-25 00:57:01.700610 | orchestrator | Sunday 25 May 2025 00:56:19 +0000 (0:00:00.185) 0:00:10.184 ************ 2025-05-25 00:57:01.700627 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700638 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.700649 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.700660 | orchestrator | 2025-05-25 00:57:01.700742 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-25 00:57:01.700756 | orchestrator | Sunday 25 May 2025 00:56:19 +0000 (0:00:00.418) 0:00:10.602 ************ 2025-05-25 00:57:01.700766 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:01.700785 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:01.700804 | orchestrator | ok: 
[testbed-node-2] 2025-05-25 00:57:01.700831 | orchestrator | 2025-05-25 00:57:01.700855 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-25 00:57:01.700873 | orchestrator | Sunday 25 May 2025 00:56:20 +0000 (0:00:00.304) 0:00:10.907 ************ 2025-05-25 00:57:01.700891 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700909 | orchestrator | 2025-05-25 00:57:01.700925 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-25 00:57:01.700952 | orchestrator | Sunday 25 May 2025 00:56:20 +0000 (0:00:00.244) 0:00:11.152 ************ 2025-05-25 00:57:01.700970 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.700985 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.701000 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.701017 | orchestrator | 2025-05-25 00:57:01.701032 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-25 00:57:01.701050 | orchestrator | Sunday 25 May 2025 00:56:20 +0000 (0:00:00.377) 0:00:11.529 ************ 2025-05-25 00:57:01.701066 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:01.701083 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:01.701100 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:01.701112 | orchestrator | 2025-05-25 00:57:01.701122 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-25 00:57:01.701131 | orchestrator | Sunday 25 May 2025 00:56:21 +0000 (0:00:00.464) 0:00:11.994 ************ 2025-05-25 00:57:01.701141 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.701150 | orchestrator | 2025-05-25 00:57:01.701160 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-25 00:57:01.701170 | orchestrator | Sunday 25 May 2025 00:56:21 +0000 (0:00:00.149) 0:00:12.144 ************ 
2025-05-25 00:57:01.701179 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.701188 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.701198 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.701207 | orchestrator | 2025-05-25 00:57:01.701217 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-25 00:57:01.701226 | orchestrator | Sunday 25 May 2025 00:56:21 +0000 (0:00:00.446) 0:00:12.590 ************ 2025-05-25 00:57:01.701236 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:01.701245 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:01.701255 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:01.701264 | orchestrator | 2025-05-25 00:57:01.701274 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-25 00:57:01.701283 | orchestrator | Sunday 25 May 2025 00:56:22 +0000 (0:00:00.427) 0:00:13.017 ************ 2025-05-25 00:57:01.701292 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.701302 | orchestrator | 2025-05-25 00:57:01.701311 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-25 00:57:01.701321 | orchestrator | Sunday 25 May 2025 00:56:22 +0000 (0:00:00.137) 0:00:13.155 ************ 2025-05-25 00:57:01.701330 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.701340 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.701349 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.701359 | orchestrator | 2025-05-25 00:57:01.701368 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-25 00:57:01.701378 | orchestrator | Sunday 25 May 2025 00:56:22 +0000 (0:00:00.416) 0:00:13.572 ************ 2025-05-25 00:57:01.701387 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:01.701413 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:01.701423 | 
orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:01.701432 | orchestrator | 2025-05-25 00:57:01.701442 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-25 00:57:01.701454 | orchestrator | Sunday 25 May 2025 00:56:23 +0000 (0:00:00.429) 0:00:14.001 ************ 2025-05-25 00:57:01.701471 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.701498 | orchestrator | 2025-05-25 00:57:01.701514 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-25 00:57:01.701529 | orchestrator | Sunday 25 May 2025 00:56:23 +0000 (0:00:00.124) 0:00:14.126 ************ 2025-05-25 00:57:01.701546 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.701563 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.701578 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.701595 | orchestrator | 2025-05-25 00:57:01.701605 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-25 00:57:01.701615 | orchestrator | Sunday 25 May 2025 00:56:23 +0000 (0:00:00.663) 0:00:14.790 ************ 2025-05-25 00:57:01.701625 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:57:01.701634 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:57:01.701643 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:57:01.701653 | orchestrator | 2025-05-25 00:57:01.701700 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-25 00:57:01.701711 | orchestrator | Sunday 25 May 2025 00:56:27 +0000 (0:00:03.251) 0:00:18.041 ************ 2025-05-25 00:57:01.701721 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-25 00:57:01.701730 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-25 00:57:01.701740 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-25 00:57:01.701749 | orchestrator | 2025-05-25 00:57:01.701759 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-25 00:57:01.701769 | orchestrator | Sunday 25 May 2025 00:56:30 +0000 (0:00:03.095) 0:00:21.137 ************ 2025-05-25 00:57:01.701778 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-25 00:57:01.701788 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-25 00:57:01.701798 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-25 00:57:01.701807 | orchestrator | 2025-05-25 00:57:01.701817 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-25 00:57:01.701826 | orchestrator | Sunday 25 May 2025 00:56:33 +0000 (0:00:03.191) 0:00:24.329 ************ 2025-05-25 00:57:01.701836 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-25 00:57:01.701846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-25 00:57:01.701855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-25 00:57:01.701864 | orchestrator | 2025-05-25 00:57:01.701880 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-25 00:57:01.701890 | orchestrator | Sunday 25 May 2025 00:56:36 +0000 (0:00:02.624) 0:00:26.954 ************ 2025-05-25 00:57:01.701900 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.701909 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.701919 | orchestrator | skipping: [testbed-node-2] 
2025-05-25 00:57:01.701928 | orchestrator |
2025-05-25 00:57:01.701938 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-05-25 00:57:01.701947 | orchestrator | Sunday 25 May 2025 00:56:36 +0000 (0:00:00.388) 0:00:27.342 ************
2025-05-25 00:57:01.701957 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.701974 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:01.701984 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:01.701994 | orchestrator |
2025-05-25 00:57:01.702003 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-25 00:57:01.702013 | orchestrator | Sunday 25 May 2025 00:56:36 +0000 (0:00:00.315) 0:00:27.658 ************
2025-05-25 00:57:01.702059 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:57:01.702072 | orchestrator |
2025-05-25 00:57:01.702082 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-05-25 00:57:01.702091 | orchestrator | Sunday 25 May 2025 00:56:37 +0000 (0:00:00.525) 0:00:28.184 ************
2025-05-25 00:57:01.702114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702170 | orchestrator |
2025-05-25 00:57:01.702180 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-05-25 00:57:01.702190 | orchestrator | Sunday 25 May 2025 00:56:39 +0000 (0:00:01.829) 0:00:30.013 ************
2025-05-25 00:57:01.702206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702222 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.702240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702251 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:01.702267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702283 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:01.702293 | orchestrator |
2025-05-25 00:57:01.702324 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-05-25 00:57:01.702334 | orchestrator | Sunday 25 May 2025 00:56:40 +0000 (0:00:01.250) 0:00:31.264 ************
2025-05-25 00:57:01.702358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702376 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:01.702386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-25 00:57:01.702397 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:01.702420 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-25 00:57:01.702438 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.702447 | orchestrator | 2025-05-25 00:57:01.702457 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-25 00:57:01.702466 | orchestrator | Sunday 25 May 2025 00:56:41 +0000 (0:00:01.169) 0:00:32.434 ************ 2025-05-25 00:57:01.702483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:57:01.702500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:57:01.702523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-25 00:57:01.702534 | orchestrator | 2025-05-25 00:57:01.702544 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-25 00:57:01.702553 | orchestrator | Sunday 25 May 2025 00:56:45 +0000 (0:00:04.096) 0:00:36.530 ************ 2025-05-25 00:57:01.702574 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:01.702583 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:01.702593 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:01.702602 | orchestrator | 2025-05-25 00:57:01.702612 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-25 00:57:01.702621 | orchestrator | Sunday 25 May 2025 00:56:46 +0000 (0:00:00.303) 0:00:36.833 ************ 2025-05-25 
00:57:01.702631 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:57:01.702640 | orchestrator | 2025-05-25 00:57:01.702650 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-25 00:57:01.702660 | orchestrator | Sunday 25 May 2025 00:56:46 +0000 (0:00:00.506) 0:00:37.340 ************ 2025-05-25 00:57:01.702692 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms') 2025-05-25 00:57:01.702712 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "mysql_db", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134608.1126175-2098-49892317413007/AnsiballZ_mysql_db.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134608.1126175-2098-49892317413007/AnsiballZ_mysql_db.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134608.1126175-2098-49892317413007/AnsiballZ_mysql_db.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.mysql.plugins.modules.mysql_db', init_globals=dict(_module_fqn='ansible_collections.community.mysql.plugins.modules.mysql_db', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_mysql_db_payload_bvetb5y0/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 725, in \n File 
\"/tmp/ansible_mysql_db_payload_bvetb5y0/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 662, in main\n File \"/tmp/ansible_mysql_db_payload_bvetb5y0/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 337, in db_exists\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 153, in execute\n result = self._query(query)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 322, in _query\n conn.query(q)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 558, in query\n self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 822, in _read_query_result\n result.read()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 1200, in read\n first_packet = self.connection._read_packet()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 772, in _read_packet\n packet.raise_for_error()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/protocol.py\", line 221, in raise_for_error\n err.raise_mysql_exception(self._data)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/err.py\", line 143, in raise_mysql_exception\n raise errorclass(errno, errval)\npymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms')\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-05-25 00:57:01.702724 | orchestrator | 2025-05-25 00:57:01.702740 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 00:57:01.702750 | orchestrator | testbed-node-0 : ok=35  changed=7  unreachable=0 failed=1  skipped=27  rescued=0 ignored=0 2025-05-25 00:57:01.702760 | orchestrator | testbed-node-1 : ok=35  changed=7  
unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-25 00:57:01.702770 | orchestrator | testbed-node-2 : ok=35  changed=7  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-25 00:57:01.702780 | orchestrator | 2025-05-25 00:57:01.702790 | orchestrator | 2025-05-25 00:57:01.702799 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 00:57:01.702808 | orchestrator | Sunday 25 May 2025 00:56:58 +0000 (0:00:12.324) 0:00:49.665 ************ 2025-05-25 00:57:01.702818 | orchestrator | =============================================================================== 2025-05-25 00:57:01.702828 | orchestrator | horizon : Creating Horizon database ------------------------------------ 12.32s 2025-05-25 00:57:01.702837 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.10s 2025-05-25 00:57:01.702847 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.25s 2025-05-25 00:57:01.702861 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.19s 2025-05-25 00:57:01.702870 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.10s 2025-05-25 00:57:01.702880 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.62s 2025-05-25 00:57:01.702889 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.83s 2025-05-25 00:57:01.702899 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.72s 2025-05-25 00:57:01.702908 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.25s 2025-05-25 00:57:01.702918 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.17s 2025-05-25 00:57:01.702927 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.83s 2025-05-25 00:57:01.702937 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-05-25 00:57:01.702946 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.66s 2025-05-25 00:57:01.702956 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-05-25 00:57:01.702965 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-05-25 00:57:01.702974 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.49s 2025-05-25 00:57:01.702984 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s 2025-05-25 00:57:01.702993 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s 2025-05-25 00:57:01.703002 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2025-05-25 00:57:01.703012 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s 2025-05-25 00:57:01.703021 | orchestrator | 2025-05-25 00:57:01 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED 2025-05-25 00:57:01.703031 | orchestrator | 2025-05-25 00:57:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:01.703041 | orchestrator | 2025-05-25 00:57:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:04.749781 | orchestrator | 2025-05-25 00:57:04 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:04.751235 | orchestrator | 2025-05-25 00:57:04 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED 2025-05-25 00:57:04.753336 | orchestrator | 2025-05-25 00:57:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:04.753402 | 
orchestrator | 2025-05-25 00:57:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:07.798727 | orchestrator | 2025-05-25 00:57:07 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:07.799906 | orchestrator | 2025-05-25 00:57:07 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED 2025-05-25 00:57:07.802836 | orchestrator | 2025-05-25 00:57:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:07.802925 | orchestrator | 2025-05-25 00:57:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:10.854188 | orchestrator | 2025-05-25 00:57:10 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:10.854934 | orchestrator | 2025-05-25 00:57:10 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state STARTED 2025-05-25 00:57:10.856750 | orchestrator | 2025-05-25 00:57:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:10.856783 | orchestrator | 2025-05-25 00:57:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:13.908022 | orchestrator | 2025-05-25 00:57:13 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:13.908471 | orchestrator | 2025-05-25 00:57:13 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:13.912003 | orchestrator | 2025-05-25 00:57:13 | INFO  | Task abb6cd2e-2aa2-476b-99a8-15c58d8993dc is in state SUCCESS 2025-05-25 00:57:13.912457 | orchestrator | 2025-05-25 00:57:13.913913 | orchestrator | 2025-05-25 00:57:13.914346 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 00:57:13.914366 | orchestrator | 2025-05-25 00:57:13.914377 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 00:57:13.914390 | orchestrator | Sunday 25 May 2025 00:56:09 +0000 
(0:00:00.306) 0:00:00.306 ************ 2025-05-25 00:57:13.914401 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:13.914413 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:13.914424 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:13.914435 | orchestrator | 2025-05-25 00:57:13.914446 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 00:57:13.914458 | orchestrator | Sunday 25 May 2025 00:56:10 +0000 (0:00:00.470) 0:00:00.777 ************ 2025-05-25 00:57:13.914469 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-25 00:57:13.914480 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-25 00:57:13.914507 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-25 00:57:13.914519 | orchestrator | 2025-05-25 00:57:13.914530 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-25 00:57:13.914541 | orchestrator | 2025-05-25 00:57:13.914552 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-25 00:57:13.914563 | orchestrator | Sunday 25 May 2025 00:56:10 +0000 (0:00:00.343) 0:00:01.120 ************ 2025-05-25 00:57:13.914574 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:57:13.914585 | orchestrator | 2025-05-25 00:57:13.914596 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-25 00:57:13.914607 | orchestrator | Sunday 25 May 2025 00:56:11 +0000 (0:00:00.832) 0:00:01.953 ************ 2025-05-25 00:57:13.914625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.914666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.914748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.914771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.914785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.914797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.914817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.914829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.914841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.914852 | orchestrator | 2025-05-25 00:57:13.914864 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-25 00:57:13.914881 | orchestrator | Sunday 25 May 2025 00:56:13 +0000 (0:00:02.241) 0:00:04.195 ************ 2025-05-25 00:57:13.914893 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-25 00:57:13.914905 | orchestrator | 2025-05-25 00:57:13.914916 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-25 00:57:13.914927 | orchestrator | Sunday 25 May 2025 00:56:13 +0000 (0:00:00.474) 0:00:04.670 ************ 2025-05-25 00:57:13.914938 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:57:13.914951 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:57:13.914964 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:57:13.914976 | orchestrator | 2025-05-25 00:57:13.914989 | orchestrator | TASK [keystone : Check if Keystone 
domain-specific config is supplied] ********* 2025-05-25 00:57:13.915001 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.310) 0:00:04.980 ************ 2025-05-25 00:57:13.915014 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-25 00:57:13.915027 | orchestrator | 2025-05-25 00:57:13.915045 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-25 00:57:13.915058 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.339) 0:00:05.319 ************ 2025-05-25 00:57:13.915071 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:57:13.915090 | orchestrator | 2025-05-25 00:57:13.915103 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-25 00:57:13.915116 | orchestrator | Sunday 25 May 2025 00:56:15 +0000 (0:00:00.489) 0:00:05.809 ************ 2025-05-25 00:57:13.915130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.915145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.915168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.915188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915275 | orchestrator | 2025-05-25 00:57:13.915287 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-25 00:57:13.915300 | orchestrator | Sunday 25 May 2025 00:56:18 +0000 (0:00:03.005) 0:00:08.815 ************ 2025-05-25 00:57:13.915324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:57:13.915343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.915355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:57:13.915366 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:13.915378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 
00:57:13.915390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.915410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:57:13.915427 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:13.915444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:57:13.915456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.915468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:57:13.915479 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:13.915490 | orchestrator | 2025-05-25 00:57:13.915501 | orchestrator | TASK [service-cert-copy : keystone | Copying over 
backend internal TLS key] **** 2025-05-25 00:57:13.915512 | orchestrator | Sunday 25 May 2025 00:56:18 +0000 (0:00:00.874) 0:00:09.689 ************ 2025-05-25 00:57:13.915524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:57:13.915543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.915566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:57:13.915578 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:57:13.915589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:57:13.915601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.915613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:57:13.915625 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:57:13.915645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-25 00:57:13.915674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.915724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-25 00:57:13.915737 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:57:13.915748 | orchestrator | 2025-05-25 00:57:13.915759 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-25 00:57:13.915770 | orchestrator | Sunday 25 May 2025 00:56:20 +0000 (0:00:01.072) 0:00:10.762 ************ 2025-05-25 00:57:13.915782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.915795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}}}}) 2025-05-25 00:57:13.915827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.915840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.915921 | orchestrator | 2025-05-25 00:57:13.915932 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-25 00:57:13.915948 | orchestrator | Sunday 25 May 2025 00:56:23 +0000 (0:00:03.368) 0:00:14.130 ************ 2025-05-25 00:57:13.915959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.915971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.915983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.916002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.916025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.916038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.916049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.916060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.916072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 00:57:13.916089 | orchestrator |
2025-05-25 00:57:13.916100 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-05-25 00:57:13.916111 | orchestrator | Sunday 25 May 2025 00:56:30 +0000 (0:00:07.594) 0:00:21.725 ************
2025-05-25 00:57:13.916122 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:57:13.916133 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:57:13.916144 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:57:13.916155 | orchestrator |
2025-05-25 00:57:13.916166 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-05-25 00:57:13.916176 | orchestrator | Sunday 25 May 2025 00:56:33 +0000 (0:00:02.978) 0:00:24.703 ************
2025-05-25 00:57:13.916187 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:13.916198 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:13.916209 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:13.916220 | orchestrator |
2025-05-25 00:57:13.916235 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-05-25 00:57:13.916246 | orchestrator | Sunday 25 May 2025 00:56:35 +0000 (0:00:01.438) 0:00:26.142 ************
2025-05-25 00:57:13.916257 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:13.916268 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:13.916279 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:13.916289 | orchestrator |
2025-05-25 00:57:13.916300 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-05-25 00:57:13.916311 | orchestrator | Sunday 25 May 2025 00:56:35 +0000 (0:00:00.403) 0:00:26.545 ************
2025-05-25 00:57:13.916322 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:13.916332 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:13.916343 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:13.916354 | orchestrator |
2025-05-25 00:57:13.916365 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-05-25 00:57:13.916375 | orchestrator | Sunday 25 May 2025 00:56:36 +0000 (0:00:00.330) 0:00:26.876 ************
2025-05-25 00:57:13.916392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-25 00:57:13.916404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.916422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.916434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-25 00:57:13.916457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.916470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2025-05-25 00:57:13.916481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.916499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.916510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2025-05-25 00:57:13.916521 | orchestrator |
2025-05-25 00:57:13.916532 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-25 00:57:13.916544 | orchestrator | Sunday 25 May 2025 00:56:38 +0000 (0:00:02.289) 0:00:29.166 ************
2025-05-25 00:57:13.916554 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:13.916565 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:13.916576 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:13.916587 | orchestrator |
2025-05-25 00:57:13.916598 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-05-25 00:57:13.916609 | orchestrator | Sunday 25 May 2025 00:56:38 +0000 (0:00:00.319) 0:00:29.486 ************
2025-05-25 00:57:13.916620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-25 00:57:13.916631 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-25 00:57:13.916647 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-25 00:57:13.916659 | orchestrator |
2025-05-25 00:57:13.916670 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-05-25 00:57:13.916680 | orchestrator | Sunday 25 May 2025 00:56:41 +0000 (0:00:02.449) 0:00:31.936 ************
2025-05-25 00:57:13.916753 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 00:57:13.916764 | orchestrator |
2025-05-25 00:57:13.916775 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-05-25 00:57:13.916786 | orchestrator | Sunday 25 May 2025 00:56:41 +0000 (0:00:00.508) 0:00:32.444 ************
2025-05-25 00:57:13.916797 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:13.916808 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:13.916819 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:13.916830 | orchestrator |
2025-05-25 00:57:13.916841 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-05-25 00:57:13.916856 | orchestrator | Sunday 25 May 2025 00:56:42 +0000 (0:00:00.974) 0:00:33.419 ************
2025-05-25 00:57:13.916868 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-25 00:57:13.916879 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-25 00:57:13.916889 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 00:57:13.916900 | orchestrator |
2025-05-25 00:57:13.916911 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-05-25 00:57:13.916922 | orchestrator | Sunday 25 May 2025 00:56:43 +0000 (0:00:00.829) 0:00:34.248 ************
2025-05-25 00:57:13.916933 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:57:13.916951 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:57:13.916963 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:57:13.916974 | orchestrator |
2025-05-25 00:57:13.916985 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-05-25 00:57:13.916995 | orchestrator | Sunday 25 May 2025 00:56:43 +0000 (0:00:00.377) 0:00:34.626 ************
2025-05-25 00:57:13.917006 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-25 00:57:13.917017 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-25 00:57:13.917028 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-25 00:57:13.917039 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-25 00:57:13.917050 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-25 00:57:13.917061 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-25 00:57:13.917071 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-25 00:57:13.917082 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-25 00:57:13.917093 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-25 00:57:13.917104 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-25 00:57:13.917115 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-25 00:57:13.917124 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-25 00:57:13.917134 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-25 00:57:13.917143 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-25 00:57:13.917153 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-25 00:57:13.917163 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-25 00:57:13.917173 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-25 00:57:13.917182 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-25 00:57:13.917192 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-25 00:57:13.917202 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-25 00:57:13.917211 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-25 00:57:13.917221 | orchestrator |
2025-05-25 00:57:13.917230 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-05-25 00:57:13.917240 | orchestrator | Sunday 25 May 2025 00:56:54 +0000 (0:00:10.226) 0:00:44.853 ************
2025-05-25 00:57:13.917250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 00:57:13.917260 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 00:57:13.917269 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-25 00:57:13.917279 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 00:57:13.917289 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 00:57:13.917304 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-25 00:57:13.917314 | orchestrator |
2025-05-25 00:57:13.917333 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-05-25 00:57:13.917343 | orchestrator | Sunday 25 May 2025 00:56:57 +0000 (0:00:03.134) 0:00:47.987 ************
2025-05-25 00:57:13.917357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.917369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.917380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-25 00:57:13.917392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.917409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.917430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-25 00:57:13.917441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-25 00:57:13.917451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 00:57:13.917461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-25 00:57:13.917471 | orchestrator |
2025-05-25 00:57:13.917481 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-25 00:57:13.917491 | orchestrator | Sunday 25 May 2025 00:57:00 +0000 (0:00:02.771) 0:00:50.759 ************
2025-05-25 00:57:13.917500 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:57:13.917510 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:57:13.917520 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:57:13.917529 | orchestrator |
2025-05-25 00:57:13.917539 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-25 00:57:13.917549 | orchestrator | Sunday 25 May 2025 00:57:00 +0000 (0:00:00.278) 0:00:51.037 ************
2025-05-25 00:57:13.917560 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms')
2025-05-25 00:57:13.917592 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "mysql_db", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134621.815452-2141-276674755486064/AnsiballZ_mysql_db.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134621.815452-2141-276674755486064/AnsiballZ_mysql_db.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134621.815452-2141-276674755486064/AnsiballZ_mysql_db.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.mysql.plugins.modules.mysql_db', init_globals=dict(_module_fqn='ansible_collections.community.mysql.plugins.modules.mysql_db', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_mysql_db_payload_peihdc4a/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 725, in \n File \"/tmp/ansible_mysql_db_payload_peihdc4a/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 662, in main\n File \"/tmp/ansible_mysql_db_payload_peihdc4a/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 337, in db_exists\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 153, in execute\n result = self._query(query)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 322, in _query\n conn.query(q)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 558, in query\n self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 822, in _read_query_result\n result.read()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 1200, in read\n first_packet = self.connection._read_packet()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 772, in _read_packet\n packet.raise_for_error()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/protocol.py\", line 221, in raise_for_error\n err.raise_mysql_exception(self._data)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/err.py\", line 143, in raise_mysql_exception\n raise errorclass(errno, errval)\npymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms')\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-05-25 00:57:13.917605 | orchestrator |
2025-05-25 00:57:13.917616 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:57:13.917626 | orchestrator | testbed-node-0 : ok=20  changed=10  unreachable=0 failed=1  skipped=8  rescued=0 ignored=0
2025-05-25 00:57:13.917636 | orchestrator | testbed-node-1 : ok=17  changed=10  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-25 00:57:13.917647 | orchestrator | testbed-node-2 : ok=17  changed=10  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-25 00:57:13.917657 | orchestrator |
2025-05-25 00:57:13.917666 | orchestrator |
2025-05-25 00:57:13.917676 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:57:13.917703 | orchestrator | Sunday 25 May 2025 00:57:12 +0000 (0:00:12.146) 0:01:03.184 ************
2025-05-25 00:57:13.917713 | orchestrator | ===============================================================================
2025-05-25 00:57:13.917729 | orchestrator | keystone : Creating keystone database ---------------------------------- 12.15s
2025-05-25 00:57:13.917739 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.23s
2025-05-25 00:57:13.917748 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.59s
2025-05-25 00:57:13.917758 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.37s
2025-05-25 00:57:13.917767 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.13s
2025-05-25 00:57:13.917777 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.01s
2025-05-25 00:57:13.917786 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.98s
2025-05-25 00:57:13.917796 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.77s
2025-05-25 00:57:13.917805 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.45s
2025-05-25 00:57:13.917815 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.29s
2025-05-25 00:57:13.917830 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.24s
2025-05-25 00:57:13.917839 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 1.44s
2025-05-25 00:57:13.917849 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 1.07s
2025-05-25 00:57:13.917859 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.97s
2025-05-25 00:57:13.917868 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.87s
2025-05-25 00:57:13.917878 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.83s
2025-05-25 00:57:13.917888 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 0.83s
2025-05-25 00:57:13.917897 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.51s
2025-05-25 00:57:13.917911 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.49s
2025-05-25 00:57:13.917921 | orchestrator | keystone : Check if policies shall be overwritten ----------------------- 0.48s
2025-05-25 00:57:13.918721 | orchestrator | 2025-05-25 00:57:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:57:13.923535 | orchestrator | 2025-05-25 00:57:13 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED
2025-05-25 00:57:13.926439 | orchestrator | 2025-05-25 00:57:13 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED
2025-05-25 00:57:13.934996 | orchestrator | 2025-05-25 00:57:13 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:57:13.935056 | orchestrator | 2025-05-25 00:57:13 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:57:16.978372 | orchestrator | 2025-05-25 00:57:16 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED
2025-05-25 00:57:16.978659 | orchestrator | 2025-05-25 00:57:16 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:57:16.980594 | orchestrator | 2025-05-25 00:57:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:57:16.983875 | orchestrator | 2025-05-25 00:57:16 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED
2025-05-25 00:57:16.984308 | orchestrator | 2025-05-25 00:57:16 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED
2025-05-25 00:57:16.984948 | orchestrator | 2025-05-25 00:57:16 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:57:16.984972 | orchestrator | 2025-05-25 00:57:16 | INFO  | Wait 1 second(s) until the next check
2025-05-25 
00:57:20.011655 | orchestrator | 2025-05-25 00:57:20 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:20.012674 | orchestrator | 2025-05-25 00:57:20 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:20.014265 | orchestrator | 2025-05-25 00:57:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:20.015407 | orchestrator | 2025-05-25 00:57:20 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:20.016725 | orchestrator | 2025-05-25 00:57:20 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:20.018109 | orchestrator | 2025-05-25 00:57:20 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:20.018164 | orchestrator | 2025-05-25 00:57:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:23.060993 | orchestrator | 2025-05-25 00:57:23 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:23.062325 | orchestrator | 2025-05-25 00:57:23 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:23.062752 | orchestrator | 2025-05-25 00:57:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:23.063685 | orchestrator | 2025-05-25 00:57:23 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:23.064843 | orchestrator | 2025-05-25 00:57:23 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:23.068462 | orchestrator | 2025-05-25 00:57:23 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:23.068511 | orchestrator | 2025-05-25 00:57:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:26.119074 | orchestrator | 2025-05-25 00:57:26 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 
00:57:26.120086 | orchestrator | 2025-05-25 00:57:26 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:26.121569 | orchestrator | 2025-05-25 00:57:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:26.123676 | orchestrator | 2025-05-25 00:57:26 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:26.126180 | orchestrator | 2025-05-25 00:57:26 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:26.128160 | orchestrator | 2025-05-25 00:57:26 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:26.128199 | orchestrator | 2025-05-25 00:57:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:29.170586 | orchestrator | 2025-05-25 00:57:29 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:29.172789 | orchestrator | 2025-05-25 00:57:29 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:29.175927 | orchestrator | 2025-05-25 00:57:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:29.177278 | orchestrator | 2025-05-25 00:57:29 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:29.178524 | orchestrator | 2025-05-25 00:57:29 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:29.180356 | orchestrator | 2025-05-25 00:57:29 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:29.180400 | orchestrator | 2025-05-25 00:57:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:32.229777 | orchestrator | 2025-05-25 00:57:32 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:32.231687 | orchestrator | 2025-05-25 00:57:32 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 
00:57:32.233739 | orchestrator | 2025-05-25 00:57:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:32.235163 | orchestrator | 2025-05-25 00:57:32 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:32.238946 | orchestrator | 2025-05-25 00:57:32 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:32.241917 | orchestrator | 2025-05-25 00:57:32 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:32.241948 | orchestrator | 2025-05-25 00:57:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:35.288940 | orchestrator | 2025-05-25 00:57:35 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:35.291102 | orchestrator | 2025-05-25 00:57:35 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:35.292960 | orchestrator | 2025-05-25 00:57:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:35.294461 | orchestrator | 2025-05-25 00:57:35 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:35.296065 | orchestrator | 2025-05-25 00:57:35 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:35.297627 | orchestrator | 2025-05-25 00:57:35 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:35.297703 | orchestrator | 2025-05-25 00:57:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:38.343075 | orchestrator | 2025-05-25 00:57:38 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:38.344229 | orchestrator | 2025-05-25 00:57:38 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:38.345946 | orchestrator | 2025-05-25 00:57:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 
00:57:38.347496 | orchestrator | 2025-05-25 00:57:38 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:38.349328 | orchestrator | 2025-05-25 00:57:38 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:38.350707 | orchestrator | 2025-05-25 00:57:38 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:38.350815 | orchestrator | 2025-05-25 00:57:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:41.404011 | orchestrator | 2025-05-25 00:57:41 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:41.404550 | orchestrator | 2025-05-25 00:57:41 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:41.406073 | orchestrator | 2025-05-25 00:57:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:41.406881 | orchestrator | 2025-05-25 00:57:41 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:41.408458 | orchestrator | 2025-05-25 00:57:41 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:41.409067 | orchestrator | 2025-05-25 00:57:41 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:41.409258 | orchestrator | 2025-05-25 00:57:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:44.448568 | orchestrator | 2025-05-25 00:57:44 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:44.449433 | orchestrator | 2025-05-25 00:57:44 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:44.451084 | orchestrator | 2025-05-25 00:57:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:44.452756 | orchestrator | 2025-05-25 00:57:44 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 
00:57:44.454604 | orchestrator | 2025-05-25 00:57:44 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:44.456121 | orchestrator | 2025-05-25 00:57:44 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:44.456178 | orchestrator | 2025-05-25 00:57:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:47.502913 | orchestrator | 2025-05-25 00:57:47 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:47.503920 | orchestrator | 2025-05-25 00:57:47 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:47.508775 | orchestrator | 2025-05-25 00:57:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:47.510163 | orchestrator | 2025-05-25 00:57:47 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:47.511702 | orchestrator | 2025-05-25 00:57:47 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:47.513007 | orchestrator | 2025-05-25 00:57:47 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:47.513066 | orchestrator | 2025-05-25 00:57:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:50.549330 | orchestrator | 2025-05-25 00:57:50 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:50.550181 | orchestrator | 2025-05-25 00:57:50 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:50.551300 | orchestrator | 2025-05-25 00:57:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:50.552639 | orchestrator | 2025-05-25 00:57:50 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:50.554008 | orchestrator | 2025-05-25 00:57:50 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 
00:57:50.556178 | orchestrator | 2025-05-25 00:57:50 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:50.556218 | orchestrator | 2025-05-25 00:57:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:53.605593 | orchestrator | 2025-05-25 00:57:53 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:53.607836 | orchestrator | 2025-05-25 00:57:53 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:53.609369 | orchestrator | 2025-05-25 00:57:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:53.611223 | orchestrator | 2025-05-25 00:57:53 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:53.613381 | orchestrator | 2025-05-25 00:57:53 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:53.614825 | orchestrator | 2025-05-25 00:57:53 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:53.614854 | orchestrator | 2025-05-25 00:57:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:56.665579 | orchestrator | 2025-05-25 00:57:56 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:56.667032 | orchestrator | 2025-05-25 00:57:56 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:56.669061 | orchestrator | 2025-05-25 00:57:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:56.670664 | orchestrator | 2025-05-25 00:57:56 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:56.672997 | orchestrator | 2025-05-25 00:57:56 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:56.675337 | orchestrator | 2025-05-25 00:57:56 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 
00:57:56.675374 | orchestrator | 2025-05-25 00:57:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:57:59.728375 | orchestrator | 2025-05-25 00:57:59 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:57:59.730147 | orchestrator | 2025-05-25 00:57:59 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:57:59.732054 | orchestrator | 2025-05-25 00:57:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:57:59.733294 | orchestrator | 2025-05-25 00:57:59 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:57:59.735912 | orchestrator | 2025-05-25 00:57:59 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:57:59.738359 | orchestrator | 2025-05-25 00:57:59 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:57:59.738389 | orchestrator | 2025-05-25 00:57:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:02.784979 | orchestrator | 2025-05-25 00:58:02 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:58:02.785828 | orchestrator | 2025-05-25 00:58:02 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:58:02.786909 | orchestrator | 2025-05-25 00:58:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:02.788349 | orchestrator | 2025-05-25 00:58:02 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:02.789586 | orchestrator | 2025-05-25 00:58:02 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:58:02.792283 | orchestrator | 2025-05-25 00:58:02 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:58:02.792310 | orchestrator | 2025-05-25 00:58:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:05.845973 | orchestrator 
| 2025-05-25 00:58:05 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:58:05.847581 | orchestrator | 2025-05-25 00:58:05 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:58:05.849229 | orchestrator | 2025-05-25 00:58:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:05.850474 | orchestrator | 2025-05-25 00:58:05 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:05.852320 | orchestrator | 2025-05-25 00:58:05 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:58:05.854062 | orchestrator | 2025-05-25 00:58:05 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:58:05.854095 | orchestrator | 2025-05-25 00:58:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:08.899995 | orchestrator | 2025-05-25 00:58:08 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:58:08.900170 | orchestrator | 2025-05-25 00:58:08 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:58:08.901033 | orchestrator | 2025-05-25 00:58:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:08.902316 | orchestrator | 2025-05-25 00:58:08 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:08.903320 | orchestrator | 2025-05-25 00:58:08 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:58:08.905623 | orchestrator | 2025-05-25 00:58:08 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:58:08.905866 | orchestrator | 2025-05-25 00:58:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:11.954319 | orchestrator | 2025-05-25 00:58:11 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:58:11.957531 | orchestrator | 
2025-05-25 00:58:11 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:58:11.959338 | orchestrator | 2025-05-25 00:58:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:11.961125 | orchestrator | 2025-05-25 00:58:11 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:11.961881 | orchestrator | 2025-05-25 00:58:11 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:58:11.963405 | orchestrator | 2025-05-25 00:58:11 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:58:11.963449 | orchestrator | 2025-05-25 00:58:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:15.023657 | orchestrator | 2025-05-25 00:58:15 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:58:15.025506 | orchestrator | 2025-05-25 00:58:15 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:58:15.026927 | orchestrator | 2025-05-25 00:58:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:15.028597 | orchestrator | 2025-05-25 00:58:15 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:15.030271 | orchestrator | 2025-05-25 00:58:15 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:58:15.033276 | orchestrator | 2025-05-25 00:58:15 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:58:15.033541 | orchestrator | 2025-05-25 00:58:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:18.077900 | orchestrator | 2025-05-25 00:58:18 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:58:18.080964 | orchestrator | 2025-05-25 00:58:18 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:58:18.081532 | orchestrator | 
2025-05-25 00:58:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:18.082314 | orchestrator | 2025-05-25 00:58:18 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:18.083178 | orchestrator | 2025-05-25 00:58:18 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:58:18.085713 | orchestrator | 2025-05-25 00:58:18 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:58:18.085926 | orchestrator | 2025-05-25 00:58:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:21.130141 | orchestrator | 2025-05-25 00:58:21 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state STARTED 2025-05-25 00:58:21.132490 | orchestrator | 2025-05-25 00:58:21 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:58:21.134483 | orchestrator | 2025-05-25 00:58:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:21.136004 | orchestrator | 2025-05-25 00:58:21 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:21.137862 | orchestrator | 2025-05-25 00:58:21 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state STARTED 2025-05-25 00:58:21.138971 | orchestrator | 2025-05-25 00:58:21 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:58:21.139066 | orchestrator | 2025-05-25 00:58:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:58:24.186853 | orchestrator | 2025-05-25 00:58:24 | INFO  | Task f52743c4-6d12-47a3-a9e7-d34aa56521b1 is in state SUCCESS 2025-05-25 00:58:24.187197 | orchestrator | 2025-05-25 00:58:24.187224 | orchestrator | 2025-05-25 00:58:24.187236 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 00:58:24.187248 | orchestrator | 2025-05-25 00:58:24.187260 | orchestrator | TASK [Group hosts 
based on Kolla action] ***************************************
2025-05-25 00:58:24.187274 | orchestrator | Sunday 25 May 2025 00:57:16 +0000 (0:00:00.253) 0:00:00.253 ************
2025-05-25 00:58:24.187293 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:58:24.187311 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:58:24.187329 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:58:24.187349 | orchestrator |
2025-05-25 00:58:24.187368 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:58:24.187386 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.327) 0:00:00.580 ************
2025-05-25 00:58:24.187405 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-25 00:58:24.187416 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-25 00:58:24.187427 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-25 00:58:24.187438 | orchestrator |
2025-05-25 00:58:24.187449 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-25 00:58:24.187459 | orchestrator |
2025-05-25 00:58:24.187470 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-25 00:58:24.187481 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.402) 0:00:00.982 ************
2025-05-25 00:58:24.187491 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:58:24.187503 | orchestrator |
2025-05-25 00:58:24.187513 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-25 00:58:24.187524 | orchestrator | Sunday 25 May 2025 00:57:18 +0000 (0:00:00.942) 0:00:01.925 ************
2025-05-25 00:58:24.187535 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (5 retries left).
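Two waiting mechanisms are visible in this log: the OSISM orchestrator's "Wait 1 second(s) until the next check" loop, which polls each submitted task ID until it leaves the STARTED state, and Ansible's until/retries mechanism behind the "FAILED - RETRYING" lines. A minimal sketch of the state-polling pattern, assuming a hypothetical `fetch_state` callable (the real OSISM manager queries Celery task state; none of these names come from its code):

```python
import time


def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=300.0):
    """Poll every task ID until none is in state STARTED.

    fetch_state(task_id) -> str is a hypothetical callable standing in
    for a real task-state lookup; interval mirrors the one-second wait
    between checks seen in the log.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        # Keep only the tasks that are still running.
        pending = [t for t in pending if fetch_state(t) == "STARTED"]
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still STARTED: {pending}")
        time.sleep(interval)
    return True
```

In the log, six tasks are polled together and the loop only moves on once a task reports SUCCESS, which matches this all-or-nothing structure.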
2025-05-25 00:58:24.187545 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (4 retries left). 2025-05-25 00:58:24.187556 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (3 retries left). 2025-05-25 00:58:24.187566 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (2 retries left). 2025-05-25 00:58:24.187577 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (1 retries left). 2025-05-25 00:58:24.187631 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134701.5013876-2692-153420445853192/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134701.5013876-2692-153420445853192/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134701.5013876-2692-153420445853192/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', 
init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_j_2rccr7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_j_2rccr7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_j_2rccr7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_j_2rccr7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_j_2rccr7/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 
1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-05-25 00:58:24.187675 | orchestrator |
2025-05-25 00:58:24.187687 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:58:24.187699 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-25 00:58:24.187711 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:58:24.187724 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:58:24.187735 | orchestrator |
2025-05-25 00:58:24.187746 | orchestrator |
2025-05-25 00:58:24.187756 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:58:24.187767 | orchestrator | Sunday 25 May 2025 00:58:22 +0000 (0:01:03.995) 0:01:05.920 ************
2025-05-25 00:58:24.187785 | orchestrator | ===============================================================================
2025-05-25 00:58:24.187938 | orchestrator | service-ks-register : designate | Creating services -------------------- 64.00s
2025-05-25 00:58:24.187957 | orchestrator | designate : include_tasks ----------------------------------------------- 0.94s
2025-05-25 00:58:24.187970 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2025-05-25 00:58:24.187982 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-05-25 00:58:24.187995 | orchestrator | 2025-05-25 00:58:24 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:24.188008 | orchestrator | 2025-05-25 00:58:24 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:24.188026 | orchestrator | 2025-05-25 00:58:24 | INFO  | Task
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:58:24.188779 | orchestrator | 2025-05-25 00:58:24 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED 2025-05-25 00:58:24.189834 | orchestrator | 2025-05-25 00:58:24.189861 | orchestrator | 2025-05-25 00:58:24.189872 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 00:58:24.189883 | orchestrator | 2025-05-25 00:58:24.189894 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 00:58:24.189905 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.350) 0:00:00.350 ************ 2025-05-25 00:58:24.189916 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:58:24.189927 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:58:24.189937 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:58:24.189948 | orchestrator | 2025-05-25 00:58:24.189959 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 00:58:24.189970 | orchestrator | Sunday 25 May 2025 00:57:18 +0000 (0:00:00.488) 0:00:00.839 ************ 2025-05-25 00:58:24.189994 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-25 00:58:24.190005 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-25 00:58:24.190064 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-25 00:58:24.190079 | orchestrator | 2025-05-25 00:58:24.190090 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-25 00:58:24.190101 | orchestrator | 2025-05-25 00:58:24.190120 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-25 00:58:24.190131 | orchestrator | Sunday 25 May 2025 00:57:18 +0000 (0:00:00.309) 0:00:01.149 ************ 2025-05-25 00:58:24.190142 | orchestrator | included: 
/ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:58:24.190153 | orchestrator | 2025-05-25 00:58:24.190309 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-25 00:58:24.190321 | orchestrator | Sunday 25 May 2025 00:57:19 +0000 (0:00:00.628) 0:00:01.777 ************ 2025-05-25 00:58:24.190332 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (5 retries left). 2025-05-25 00:58:24.190343 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (4 retries left). 2025-05-25 00:58:24.190354 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (3 retries left). 2025-05-25 00:58:24.190365 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (2 retries left). 2025-05-25 00:58:24.190375 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (1 retries left). 2025-05-25 00:58:24.190421 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134701.8275194-2710-185369653254919/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134701.8275194-2710-185369653254919/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134701.8275194-2710-185369653254919/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', 
init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_js503kha/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_js503kha/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_js503kha/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_js503kha/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_js503kha/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 
1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-05-25 00:58:24.190462 | orchestrator |
2025-05-25 00:58:24.190482 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:58:24.190501 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-25 00:58:24.190514 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:58:24.190525 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:58:24.190536 | orchestrator |
2025-05-25 00:58:24.190554 | orchestrator |
2025-05-25 00:58:24.190564 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:58:24.190575 | orchestrator | Sunday 25 May 2025 00:58:22 +0000 (0:01:03.738) 0:01:05.516 ************
2025-05-25 00:58:24.190592 | orchestrator | ===============================================================================
2025-05-25 00:58:24.190604 | orchestrator | service-ks-register : barbican | Creating services --------------------- 63.74s
2025-05-25 00:58:24.190614 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.63s
2025-05-25 00:58:24.190625 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2025-05-25 00:58:24.190636 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s
2025-05-25 00:58:24.190646 | orchestrator | 2025-05-25 00:58:24 | INFO  | Task 4b33b6d7-de88-45c6-871e-247e52b201a3 is in state SUCCESS
2025-05-25 00:58:24.190657 | orchestrator | 2025-05-25 00:58:24 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:24.190668 | orchestrator | 2025-05-25 00:58:24 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:27.236152 | orchestrator | 2025-05-25 00:58:27 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:27.238636 | orchestrator | 2025-05-25 00:58:27 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:27.240342 | orchestrator | 2025-05-25 00:58:27 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:27.240778 | orchestrator | 2025-05-25 00:58:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:27.245681 | orchestrator | 2025-05-25 00:58:27 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state STARTED
2025-05-25 00:58:27.245720 | orchestrator | 2025-05-25 00:58:27 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:27.245733 | orchestrator | 2025-05-25 00:58:27 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:30.285205 | orchestrator | 2025-05-25 00:58:30 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:30.286797 | orchestrator | 2025-05-25 00:58:30 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:30.290637 | orchestrator | 2025-05-25 00:58:30 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:30.290676 | orchestrator | 2025-05-25 00:58:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:30.290689 | orchestrator | 2025-05-25 00:58:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:30.291528 | orchestrator | 2025-05-25 00:58:30 | INFO  | Task 64fcbb76-3a56-44e7-87fa-f8fa60a506bc is in state SUCCESS
2025-05-25 00:58:30.292320 | orchestrator |
2025-05-25 00:58:30.292347 | orchestrator |
2025-05-25 00:58:30.292360 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:58:30.292371 | orchestrator |
2025-05-25 00:58:30.292382 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:58:30.292394 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.324) 0:00:00.324 ************
2025-05-25 00:58:30.292405 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:58:30.292416 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:58:30.292428 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:58:30.292439 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:58:30.292450 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:58:30.292461 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:58:30.292471 | orchestrator |
2025-05-25 00:58:30.292482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:58:30.292493 | orchestrator | Sunday 25 May 2025 00:57:18 +0000 (0:00:00.879) 0:00:01.204 ************
2025-05-25 00:58:30.292528 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-05-25 00:58:30.292541 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-05-25 00:58:30.292551 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-05-25 00:58:30.292562 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-05-25 00:58:30.292573 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-05-25 00:58:30.292583 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-05-25 00:58:30.292594 | orchestrator |
2025-05-25 00:58:30.292605 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-25 00:58:30.292616 | orchestrator |
2025-05-25 00:58:30.292626 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-25 00:58:30.292637 | orchestrator | Sunday 25 May 2025 00:57:19 +0000 (0:00:00.736) 0:00:01.940 ************
2025-05-25 00:58:30.292649 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:58:30.292662 | orchestrator |
2025-05-25 00:58:30.292673 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-25 00:58:30.292683 | orchestrator | Sunday 25 May 2025 00:57:20 +0000 (0:00:01.132) 0:00:03.072 ************
2025-05-25 00:58:30.292694 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:58:30.292705 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:58:30.292716 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:58:30.292727 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:58:30.292738 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:58:30.292749 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:58:30.292760 | orchestrator |
2025-05-25 00:58:30.292770 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-25 00:58:30.292781 | orchestrator | Sunday 25 May 2025 00:57:21 +0000 (0:00:01.106) 0:00:04.179 ************
2025-05-25 00:58:30.292792 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:58:30.292828 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:58:30.292840 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:58:30.292850 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:58:30.292861 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:58:30.292872 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:58:30.292883 | orchestrator |
2025-05-25 00:58:30.292894 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-25 00:58:30.292904 | orchestrator | Sunday 25 May 2025 00:57:22 +0000 (0:00:01.025) 0:00:05.205 ************
2025-05-25 00:58:30.292916 | orchestrator | ok: [testbed-node-0] => {
2025-05-25 00:58:30.292928 | orchestrator |  "changed": false,
2025-05-25 00:58:30.292941 | orchestrator |  "msg": "All assertions passed"
2025-05-25 00:58:30.292954 | orchestrator | }
2025-05-25 00:58:30.292967 | orchestrator | ok: [testbed-node-1] => {
2025-05-25 00:58:30.292979 | orchestrator |  "changed": false,
2025-05-25 00:58:30.292991 | orchestrator |  "msg": "All assertions passed"
2025-05-25 00:58:30.293004 | orchestrator | }
2025-05-25 00:58:30.293017 | orchestrator | ok: [testbed-node-2] => {
2025-05-25 00:58:30.293029 | orchestrator |  "changed": false,
2025-05-25 00:58:30.293042 | orchestrator |  "msg": "All assertions passed"
2025-05-25 00:58:30.293055 | orchestrator | }
2025-05-25 00:58:30.293067 | orchestrator | ok: [testbed-node-3] => {
2025-05-25 00:58:30.293095 | orchestrator |  "changed": false,
2025-05-25 00:58:30.293108 | orchestrator |  "msg": "All assertions passed"
2025-05-25 00:58:30.293120 | orchestrator | }
2025-05-25 00:58:30.293133 | orchestrator | ok: [testbed-node-4] => {
2025-05-25 00:58:30.293146 | orchestrator |  "changed": false,
2025-05-25 00:58:30.293158 | orchestrator |  "msg": "All assertions passed"
2025-05-25 00:58:30.293171 | orchestrator | }
2025-05-25 00:58:30.293184 | orchestrator | ok: [testbed-node-5] => {
2025-05-25 00:58:30.293196 | orchestrator |  "changed": false,
2025-05-25 00:58:30.293209 | orchestrator |  "msg": "All assertions passed"
2025-05-25 00:58:30.293230 | orchestrator | }
2025-05-25 00:58:30.293243 | orchestrator |
2025-05-25 00:58:30.293357 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-05-25 00:58:30.293370 | orchestrator | Sunday 25 May 2025 00:57:22 +0000 (0:00:00.547) 0:00:05.753 ************
2025-05-25 00:58:30.293381 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:58:30.293392 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:58:30.293403 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:58:30.293414 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:58:30.293424 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:58:30.293435 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:58:30.293446 | orchestrator |
2025-05-25 00:58:30.293457 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-05-25 00:58:30.293468 | orchestrator | Sunday 25 May 2025 00:57:23 +0000 (0:00:00.625) 0:00:06.378 ************
2025-05-25 00:58:30.293478 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (5 retries left).
2025-05-25 00:58:30.293490 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (4 retries left).
2025-05-25 00:58:30.293501 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (3 retries left).
2025-05-25 00:58:30.293511 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (2 retries left).
2025-05-25 00:58:30.293523 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (1 retries left).
2025-05-25 00:58:30.293577 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134705.9949837-2747-201340762032367/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134705.9949837-2747-201340762032367/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134705.9949837-2747-201340762032367/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', 
init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_ftat7ybr/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_ftat7ybr/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_ftat7ybr/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_ftat7ybr/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_ftat7ybr/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 
1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-05-25 00:58:30.293604 | orchestrator |
2025-05-25 00:58:30.293615 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:58:30.293628 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2025-05-25 00:58:30.293640 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:58:30.293651 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:58:30.293662 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:58:30.293680 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:58:30.293691 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 00:58:30.293702 | orchestrator |
2025-05-25 00:58:30.293713 | orchestrator |
2025-05-25 00:58:30.293729 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:58:30.293740 | orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:01:03.398) 0:01:09.777 ************
2025-05-25 00:58:30.293752 | orchestrator | ===============================================================================
2025-05-25 00:58:30.293762 | orchestrator | service-ks-register : neutron | Creating services ---------------------- 63.40s
2025-05-25 00:58:30.293773 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.13s
2025-05-25 00:58:30.293784 | orchestrator | neutron : Get container facts ------------------------------------------- 1.11s
2025-05-25 00:58:30.293795 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.03s
2025-05-25 00:58:30.293827 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s
2025-05-25 00:58:30.293839 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2025-05-25 00:58:30.293849 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.63s
2025-05-25 00:58:30.293860 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.55s
2025-05-25 00:58:30.293871 | orchestrator | 2025-05-25 00:58:30 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:30.293882 | orchestrator | 2025-05-25 00:58:30 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:33.333248 | orchestrator | 2025-05-25 00:58:33 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:33.334594 | orchestrator | 2025-05-25 00:58:33 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:33.335188 | orchestrator | 2025-05-25 00:58:33 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:33.336329 | orchestrator | 2025-05-25 00:58:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:33.337167 | orchestrator | 2025-05-25 00:58:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:33.338371 | orchestrator | 2025-05-25 00:58:33 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:33.338500 | orchestrator | 2025-05-25 00:58:33 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:36.382541 | orchestrator | 2025-05-25 00:58:36 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:36.383357 | orchestrator | 2025-05-25 00:58:36 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:36.384454 | orchestrator | 2025-05-25 00:58:36 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:36.386131 | orchestrator | 2025-05-25 00:58:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:36.386955 | orchestrator | 2025-05-25 00:58:36 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:36.388418 | orchestrator | 2025-05-25 00:58:36 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:36.388598 | orchestrator | 2025-05-25 00:58:36 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:39.434137 | orchestrator | 2025-05-25 00:58:39 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:39.435334 | orchestrator | 2025-05-25 00:58:39 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:39.439967 | orchestrator | 2025-05-25 00:58:39 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:39.442591 | orchestrator | 2025-05-25 00:58:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:39.445061 | orchestrator | 2025-05-25 00:58:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:39.446877 | orchestrator | 2025-05-25 00:58:39 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:39.446917 | orchestrator | 2025-05-25 00:58:39 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:42.486489 | orchestrator | 2025-05-25 00:58:42 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:42.487572 | orchestrator | 2025-05-25 00:58:42 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:42.489185 | orchestrator | 2025-05-25 00:58:42 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:42.491713 | orchestrator | 2025-05-25 00:58:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:42.495555 | orchestrator | 2025-05-25 00:58:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:42.496977 | orchestrator | 2025-05-25 00:58:42 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:42.497013 | orchestrator | 2025-05-25 00:58:42 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:45.549627 | orchestrator | 2025-05-25 00:58:45 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:45.551297 | orchestrator | 2025-05-25 00:58:45 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:45.555191 | orchestrator | 2025-05-25 00:58:45 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:45.556808 | orchestrator | 2025-05-25 00:58:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:45.558602 | orchestrator | 2025-05-25 00:58:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:45.560225 | orchestrator | 2025-05-25 00:58:45 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:45.560330 | orchestrator | 2025-05-25 00:58:45 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:48.616582 | orchestrator | 2025-05-25 00:58:48 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:48.622187 | orchestrator | 2025-05-25 00:58:48 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:48.623908 | orchestrator | 2025-05-25 00:58:48 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:48.625349 | orchestrator | 2025-05-25 00:58:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:48.626423 | orchestrator | 2025-05-25 00:58:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:48.627961 | orchestrator | 2025-05-25 00:58:48 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:48.627984 | orchestrator | 2025-05-25 00:58:48 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:51.670555 | orchestrator | 2025-05-25 00:58:51 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:51.670795 | orchestrator | 2025-05-25 00:58:51 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:51.671664 | orchestrator | 2025-05-25 00:58:51 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:51.672380 | orchestrator | 2025-05-25 00:58:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:51.673379 | orchestrator | 2025-05-25 00:58:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:51.674244 | orchestrator | 2025-05-25 00:58:51 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:51.674273 | orchestrator | 2025-05-25 00:58:51 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:54.725174 | orchestrator | 2025-05-25 00:58:54 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:54.727164 | orchestrator | 2025-05-25 00:58:54 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:54.729617 | orchestrator | 2025-05-25 00:58:54 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:54.731913 | orchestrator | 2025-05-25 00:58:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:54.733644 | orchestrator | 2025-05-25 00:58:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:54.736318 | orchestrator | 2025-05-25 00:58:54 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:54.736377 | orchestrator | 2025-05-25 00:58:54 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:58:57.783110 | orchestrator | 2025-05-25 00:58:57 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:58:57.785329 | orchestrator | 2025-05-25 00:58:57 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:58:57.787833 | orchestrator | 2025-05-25 00:58:57 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:58:57.789376 | orchestrator | 2025-05-25 00:58:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:58:57.791249 | orchestrator | 2025-05-25 00:58:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:58:57.794347 | orchestrator | 2025-05-25 00:58:57 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:58:57.794364 | orchestrator | 2025-05-25 00:58:57 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:59:00.846466 | orchestrator | 2025-05-25 00:59:00 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:59:00.847498 | orchestrator | 2025-05-25 00:59:00 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED
2025-05-25 00:59:00.848844 | orchestrator | 2025-05-25 00:59:00 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:59:00.852297 | orchestrator | 2025-05-25 00:59:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:59:00.854100 | orchestrator | 2025-05-25 00:59:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:59:00.855456 | orchestrator | 2025-05-25 00:59:00 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25
00:59:00.855482 | orchestrator | 2025-05-25 00:59:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:03.904437 | orchestrator | 2025-05-25 00:59:03 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:03.905086 | orchestrator | 2025-05-25 00:59:03 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:03.907152 | orchestrator | 2025-05-25 00:59:03 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:03.909433 | orchestrator | 2025-05-25 00:59:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:03.913587 | orchestrator | 2025-05-25 00:59:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:03.915057 | orchestrator | 2025-05-25 00:59:03 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:03.915111 | orchestrator | 2025-05-25 00:59:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:06.967794 | orchestrator | 2025-05-25 00:59:06 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:06.969745 | orchestrator | 2025-05-25 00:59:06 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:06.973610 | orchestrator | 2025-05-25 00:59:06 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:06.975242 | orchestrator | 2025-05-25 00:59:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:06.976600 | orchestrator | 2025-05-25 00:59:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:06.977487 | orchestrator | 2025-05-25 00:59:06 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:06.977636 | orchestrator | 2025-05-25 00:59:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:10.027979 | orchestrator 
| 2025-05-25 00:59:10 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:10.029756 | orchestrator | 2025-05-25 00:59:10 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:10.033107 | orchestrator | 2025-05-25 00:59:10 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:10.034538 | orchestrator | 2025-05-25 00:59:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:10.036061 | orchestrator | 2025-05-25 00:59:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:10.037137 | orchestrator | 2025-05-25 00:59:10 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:10.037182 | orchestrator | 2025-05-25 00:59:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:13.078957 | orchestrator | 2025-05-25 00:59:13 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:13.080898 | orchestrator | 2025-05-25 00:59:13 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:13.082462 | orchestrator | 2025-05-25 00:59:13 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:13.084112 | orchestrator | 2025-05-25 00:59:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:13.086779 | orchestrator | 2025-05-25 00:59:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:13.088149 | orchestrator | 2025-05-25 00:59:13 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:13.088176 | orchestrator | 2025-05-25 00:59:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:16.136747 | orchestrator | 2025-05-25 00:59:16 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:16.138733 | orchestrator | 
2025-05-25 00:59:16 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:16.143737 | orchestrator | 2025-05-25 00:59:16 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:16.145140 | orchestrator | 2025-05-25 00:59:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:16.146592 | orchestrator | 2025-05-25 00:59:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:16.147788 | orchestrator | 2025-05-25 00:59:16 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:16.147822 | orchestrator | 2025-05-25 00:59:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:19.208143 | orchestrator | 2025-05-25 00:59:19 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:19.209983 | orchestrator | 2025-05-25 00:59:19 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:19.212248 | orchestrator | 2025-05-25 00:59:19 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:19.213492 | orchestrator | 2025-05-25 00:59:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:19.215498 | orchestrator | 2025-05-25 00:59:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:19.216905 | orchestrator | 2025-05-25 00:59:19 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:19.217167 | orchestrator | 2025-05-25 00:59:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:22.273336 | orchestrator | 2025-05-25 00:59:22 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:22.273438 | orchestrator | 2025-05-25 00:59:22 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:22.274007 | orchestrator | 
2025-05-25 00:59:22 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:22.275711 | orchestrator | 2025-05-25 00:59:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:22.276632 | orchestrator | 2025-05-25 00:59:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:22.278680 | orchestrator | 2025-05-25 00:59:22 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:22.278712 | orchestrator | 2025-05-25 00:59:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:25.325917 | orchestrator | 2025-05-25 00:59:25 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:25.327498 | orchestrator | 2025-05-25 00:59:25 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state STARTED 2025-05-25 00:59:25.329212 | orchestrator | 2025-05-25 00:59:25 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED 2025-05-25 00:59:25.330568 | orchestrator | 2025-05-25 00:59:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:25.331606 | orchestrator | 2025-05-25 00:59:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:25.333192 | orchestrator | 2025-05-25 00:59:25 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:25.333279 | orchestrator | 2025-05-25 00:59:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:28.380412 | orchestrator | 2025-05-25 00:59:28 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED 2025-05-25 00:59:28.390735 | orchestrator | 2025-05-25 00:59:28.390874 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-25 00:59:28.390993 | orchestrator | 2025-05-25 00:59:28.391050 | orchestrator | PLAY [Prepare deployment of Ceph services] 
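The repeated status lines above come from a client polling asynchronous task state once per cycle until every task has left STARTED, sleeping between rounds. A minimal sketch of such a wait loop, assuming a hypothetical `get_state` callable and `wait_for_tasks` helper (stand-ins for the real OSISM client, which queries Celery task state):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=None):
    """Poll task states until none is left in STARTED, logging each check.

    `get_state` is a hypothetical callable mapping a task id to a state
    string such as "STARTED" or "SUCCESS". Returns True when all tasks
    have left STARTED, False if `timeout` seconds elapse first.
    """
    waited = 0.0
    while True:
        pending = []
        for task_id in task_ids:
            state = get_state(task_id)
            # Mirrors the "Task <id> is in state <state>" lines in the log.
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                pending.append(task_id)
        if not pending:
            return True
        if timeout is not None and waited >= timeout:
            return False
        # Mirrors the "Wait 1 second(s) until the next check" lines.
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
        waited += interval
```

In the log above, six task UUIDs stay in STARTED for roughly a minute, so the loop simply reprints the same six lines every cycle until the first task transitions.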
2025-05-25 00:59:28.391063 | orchestrator |
2025-05-25 00:59:28.391074 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-25 00:59:28.391085 | orchestrator | Sunday 25 May 2025 00:46:41 +0000 (0:00:01.797) 0:00:01.797 ************
2025-05-25 00:59:28.391165 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.391182 | orchestrator |
2025-05-25 00:59:28.391193 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-25 00:59:28.391204 | orchestrator | Sunday 25 May 2025 00:46:42 +0000 (0:00:01.232) 0:00:03.030 ************
2025-05-25 00:59:28.391216 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.391227 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.391237 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.391248 | orchestrator |
2025-05-25 00:59:28.391259 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-25 00:59:28.391399 | orchestrator | Sunday 25 May 2025 00:46:43 +0000 (0:00:00.633) 0:00:03.663 ************
2025-05-25 00:59:28.391412 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.391423 | orchestrator |
2025-05-25 00:59:28.391434 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-25 00:59:28.391446 | orchestrator | Sunday 25 May 2025 00:46:44 +0000 (0:00:01.026) 0:00:04.689 ************
2025-05-25 00:59:28.391457 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.391468 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.391479 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.391490 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.391500 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.391511 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.391521 | orchestrator |
2025-05-25 00:59:28.391532 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-25 00:59:28.391543 | orchestrator | Sunday 25 May 2025 00:46:45 +0000 (0:00:01.316) 0:00:06.006 ************
2025-05-25 00:59:28.391587 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.391598 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.391609 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.391619 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.391630 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.391641 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.391686 | orchestrator |
2025-05-25 00:59:28.391698 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-25 00:59:28.391714 | orchestrator | Sunday 25 May 2025 00:46:46 +0000 (0:00:00.816) 0:00:06.822 ************
2025-05-25 00:59:28.391732 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.391751 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.391770 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.391786 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.391804 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.391822 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.391838 | orchestrator |
2025-05-25 00:59:28.391857 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-25 00:59:28.391876 | orchestrator | Sunday 25 May 2025 00:46:47 +0000 (0:00:01.120) 0:00:07.943 ************
2025-05-25 00:59:28.391954 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.392137 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.392166 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.392232 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.392247 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.392315 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.392328 | orchestrator |
2025-05-25 00:59:28.392339 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-25 00:59:28.392350 | orchestrator | Sunday 25 May 2025 00:46:48 +0000 (0:00:00.842) 0:00:08.786 ************
2025-05-25 00:59:28.392361 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.392372 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.392383 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.392393 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.392404 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.392414 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.392425 | orchestrator |
2025-05-25 00:59:28.392436 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-25 00:59:28.392447 | orchestrator | Sunday 25 May 2025 00:46:48 +0000 (0:00:00.737) 0:00:09.523 ************
2025-05-25 00:59:28.392457 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.392468 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.392479 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.392489 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.392500 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.392510 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.392521 | orchestrator |
2025-05-25 00:59:28.392532 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-25 00:59:28.392543 | orchestrator | Sunday 25 May 2025 00:46:50 +0000 (0:00:01.120) 0:00:10.644 ************
2025-05-25 00:59:28.392554 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.392614 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.392626 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.392637 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.392647 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.392658 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.392668 | orchestrator |
2025-05-25 00:59:28.392679 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-25 00:59:28.392711 | orchestrator | Sunday 25 May 2025 00:46:50 +0000 (0:00:00.902) 0:00:11.547 ************
2025-05-25 00:59:28.392754 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.392766 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.392777 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.392787 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.392933 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.392984 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.392996 | orchestrator |
2025-05-25 00:59:28.393026 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-25 00:59:28.393038 | orchestrator | Sunday 25 May 2025 00:46:52 +0000 (0:00:01.003) 0:00:12.551 ************
2025-05-25 00:59:28.393049 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.393060 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 00:59:28.393071 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 00:59:28.393082 | orchestrator |
2025-05-25 00:59:28.393092 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-25 00:59:28.393112 | orchestrator | Sunday 25 May 2025 00:46:52 +0000 (0:00:00.677) 0:00:13.228 ************
2025-05-25 00:59:28.393123 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.393134 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.393145 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.393163 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.393181 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.393200 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.393217 | orchestrator |
2025-05-25 00:59:28.393235 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-25 00:59:28.393253 | orchestrator | Sunday 25 May 2025 00:46:54 +0000 (0:00:01.823) 0:00:15.052 ************
2025-05-25 00:59:28.393521 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.393549 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 00:59:28.393560 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 00:59:28.393571 | orchestrator |
2025-05-25 00:59:28.393582 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-25 00:59:28.393593 | orchestrator | Sunday 25 May 2025 00:46:57 +0000 (0:00:02.927) 0:00:17.982 ************
2025-05-25 00:59:28.393604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.393615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.393625 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.393636 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.393647 | orchestrator |
2025-05-25 00:59:28.393658 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-05-25 00:59:28.393669 | orchestrator | Sunday 25 May 2025 00:46:57 +0000 (0:00:00.512) 0:00:18.494 ************
2025-05-25 00:59:28.393682 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.393696 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.393707 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.393743 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.393756 | orchestrator |
2025-05-25 00:59:28.393767 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-05-25 00:59:28.393778 | orchestrator | Sunday 25 May 2025 00:46:58 +0000 (0:00:00.699) 0:00:19.194 ************
2025-05-25 00:59:28.393791 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.393805 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.393880 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.393923 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.393935 | orchestrator |
2025-05-25 00:59:28.393946 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-05-25 00:59:28.394170 | orchestrator | Sunday 25 May 2025 00:46:58 +0000 (0:00:00.279) 0:00:19.473 ************
2025-05-25 00:59:28.394194 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-25 00:46:55.118619', 'end': '2025-05-25 00:46:55.380757', 'delta': '0:00:00.262138', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.394267 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-25 00:46:56.049624', 'end': '2025-05-25 00:46:56.350877', 'delta': '0:00:00.301253', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.394280 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-25 00:46:56.932765', 'end': '2025-05-25 00:46:57.229870', 'delta': '0:00:00.297105', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-25 00:59:28.394291 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.394302 | orchestrator |
2025-05-25 00:59:28.394313 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-05-25 00:59:28.394349 | orchestrator | Sunday 25 May 2025 00:46:59 +0000 (0:00:00.186) 0:00:19.660 ************
2025-05-25 00:59:28.394367 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.394386 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.394404 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.394422 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.394439 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.394458 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.394476 | orchestrator |
2025-05-25 00:59:28.394495 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
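The "find a running mon container" / "set_fact running_mon - container" output above shows ceph-ansible probing each monitor host with `docker ps -q --filter name=ceph-mon-<hostname>` and treating empty stdout (as seen in every `'stdout': ''` item) as "no running mon yet". A small illustrative sketch of that check; the helper names are hypothetical, only the command shape is taken from the log:

```python
def mon_container_check_cmd(hostname, container_binary="docker"):
    """Build the argv that the log shows in each item's 'cmd' field,
    e.g. ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0']."""
    return [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"]


def mon_is_running(stdout):
    """`docker ps -q` prints one container ID per line for each match,
    so an empty stdout means no container matched the name filter."""
    return bool(stdout.strip())
```

On a fresh testbed deployment the filter matches nothing on any of the three mon hosts, which is why the subsequent `running_mon` facts are skipped and the play goes on to generate a new cluster fsid.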
2025-05-25 00:59:28.394515 | orchestrator | Sunday 25 May 2025 00:47:00 +0000 (0:00:01.822) 0:00:21.482 ************
2025-05-25 00:59:28.394533 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.394546 | orchestrator |
2025-05-25 00:59:28.394557 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-05-25 00:59:28.394568 | orchestrator | Sunday 25 May 2025 00:47:01 +0000 (0:00:00.809) 0:00:22.292 ************
2025-05-25 00:59:28.394579 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.394590 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.394601 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.394612 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.394627 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.394652 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.394674 | orchestrator |
2025-05-25 00:59:28.394691 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-05-25 00:59:28.394708 | orchestrator | Sunday 25 May 2025 00:47:02 +0000 (0:00:00.790) 0:00:23.082 ************
2025-05-25 00:59:28.394723 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.394740 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.394757 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.394790 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.394808 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.394828 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.394845 | orchestrator |
2025-05-25 00:59:28.394864 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-25 00:59:28.394875 | orchestrator | Sunday 25 May 2025 00:47:04 +0000 (0:00:02.075) 0:00:25.158 ************
2025-05-25 00:59:28.394926 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.394939 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.394996 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395008 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395019 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.395029 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.395040 | orchestrator |
2025-05-25 00:59:28.395051 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-05-25 00:59:28.395062 | orchestrator | Sunday 25 May 2025 00:47:05 +0000 (0:00:00.882) 0:00:26.041 ************
2025-05-25 00:59:28.395085 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395096 | orchestrator |
2025-05-25 00:59:28.395107 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-05-25 00:59:28.395117 | orchestrator | Sunday 25 May 2025 00:47:05 +0000 (0:00:00.188) 0:00:26.230 ************
2025-05-25 00:59:28.395129 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395140 | orchestrator |
2025-05-25 00:59:28.395151 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-25 00:59:28.395162 | orchestrator | Sunday 25 May 2025 00:47:06 +0000 (0:00:01.262) 0:00:27.493 ************
2025-05-25 00:59:28.395172 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395183 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.395194 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395212 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395223 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.395234 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.395245 | orchestrator |
2025-05-25 00:59:28.395255 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-05-25 00:59:28.395266 | orchestrator | Sunday 25 May 2025 00:47:08 +0000 (0:00:01.115) 0:00:28.608 ************
2025-05-25 00:59:28.395277 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395288 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.395299 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395309 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395320 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.395331 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.395341 | orchestrator |
2025-05-25 00:59:28.395428 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-05-25 00:59:28.395443 | orchestrator | Sunday 25 May 2025 00:47:09 +0000 (0:00:01.576) 0:00:30.185 ************
2025-05-25 00:59:28.395455 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395465 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.395476 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395487 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395498 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.395509 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.395520 | orchestrator |
2025-05-25 00:59:28.395530 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-05-25 00:59:28.395541 | orchestrator | Sunday 25 May 2025 00:47:10 +0000 (0:00:00.854) 0:00:31.039 ************
2025-05-25 00:59:28.395552 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395563 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.395574 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395584 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395595 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.395606 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.395659 | orchestrator |
2025-05-25 00:59:28.395672 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-05-25 00:59:28.395683 | orchestrator | Sunday 25 May 2025 00:47:11 +0000 (0:00:01.063) 0:00:32.103 ************
2025-05-25 00:59:28.395694 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395705 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.395715 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395726 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395737 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.395747 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.395758 | orchestrator |
2025-05-25 00:59:28.395769 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-05-25 00:59:28.395779 | orchestrator | Sunday 25 May 2025 00:47:12 +0000 (0:00:00.715) 0:00:32.818 ************
2025-05-25 00:59:28.395790 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395801 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.395812 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395822 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395833 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.395843 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.395854 | orchestrator |
2025-05-25 00:59:28.395865 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-25 00:59:28.395876 | orchestrator | Sunday 25 May 2025 00:47:13 +0000 (0:00:00.988) 0:00:33.806 ************
2025-05-25 00:59:28.395913 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.395933 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.395951 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.395969 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.395989 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.396006 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.396023 | orchestrator |
2025-05-25 00:59:28.396035 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-05-25 00:59:28.396046 | orchestrator | Sunday 25 May 2025 00:47:14 +0000 (0:00:00.820) 0:00:34.627 ************
2025-05-25 00:59:28.396108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 00:59:28.396123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 00:59:28.396158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-25 00:59:28.396178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-25 00:59:28.396257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part1', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part14', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part15', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part16', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396380 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.396391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf', 'scsi-SQEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_9fe04323-1149-4ab9-818d-0974511b9fdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-01-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-05-25 00:59:28.396621 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.396641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae', 'scsi-SQEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_6eb15f43-4781-46a1-a915-57ab21ed02ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396730 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91dc6ac0--e554--5716--a575--6858f2de7d62-osd--block--91dc6ac0--e554--5716--a575--6858f2de7d62', 'dm-uuid-LVM-d2a8l8sOV5VaZWIt9G7ovWvisC1s7hAtkaOlpTjYrMuvpi5viCvgA2HhfZ5QURWB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d-osd--block--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d', 'dm-uuid-LVM-bgBKeIFkqQz7hEmPHFLD4eddfhEdiUhgcC2wMt1sJ6yrzyd9TmpOW67kWK8imV82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396838 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.396854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.396949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396972 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--91dc6ac0--e554--5716--a575--6858f2de7d62-osd--block--91dc6ac0--e554--5716--a575--6858f2de7d62'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xHqAVA-Cejf-7gr9-zFPT-wav0-5qxU-8HJN3e', 'scsi-0QEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6', 'scsi-SQEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.396999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d-osd--block--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HflusL-wpGE-znvH-o1Mb-iAxF-aSGd-BxXSJA', 'scsi-0QEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8', 'scsi-SQEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d', 'scsi-SQEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86509461--9ff7--5f8d--a545--2dedda0a1471-osd--block--86509461--9ff7--5f8d--a545--2dedda0a1471', 'dm-uuid-LVM-hwcAG3bjg1BWKJHBga7T8xw0rHFgX4cD6LJQfnPgjPrIDFi2RgRkiw5AKUlsTRZt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-01-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f6e0dcd--8614--5501--94b8--6b816e10f3a3-osd--block--1f6e0dcd--8614--5501--94b8--6b816e10f3a3', 
'dm-uuid-LVM-C8PVX8qDQcM983z2QfAqCXD6yhsbuEq55ZrIlDvU6m19z1XleSOVq3exFBZsP3Nb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397135 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.397146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part1', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part14', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part15', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part16', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--86509461--9ff7--5f8d--a545--2dedda0a1471-osd--block--86509461--9ff7--5f8d--a545--2dedda0a1471'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oHsvLZ-eiHr-bhxG-uPy5-zdll-keK3-s9azWZ', 'scsi-0QEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c', 'scsi-SQEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1f6e0dcd--8614--5501--94b8--6b816e10f3a3-osd--block--1f6e0dcd--8614--5501--94b8--6b816e10f3a3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lsgh6i-v8WU-otvP-1ReA-wzwN-nPO4-etH5Z4', 'scsi-0QEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d', 'scsi-SQEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9', 'scsi-SQEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397265 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.397282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--f34e313d--bca1--5ff8--8346--de91d98588f2-osd--block--f34e313d--bca1--5ff8--8346--de91d98588f2', 'dm-uuid-LVM-qzCBpHI6u1zR1tGPZw4KwHds2G6YtCfIbBzXT9BeEmg8kAhbPEy11F8gyaE9dmNs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a31c7786--f287--566f--81cf--65786b8dbda6-osd--block--a31c7786--f287--566f--81cf--65786b8dbda6', 'dm-uuid-LVM-jeE7HgaYYNiTQ2Cdr5ptN5Bi6GUMjUK5bGPqGwiAscPmEeOlmdmjysTWSdrPwyUC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397383 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 00:59:28.397416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397427 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f34e313d--bca1--5ff8--8346--de91d98588f2-osd--block--f34e313d--bca1--5ff8--8346--de91d98588f2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CITy27-5Akz-gmxl-ss4O-c7b5-eSzN-ksQvPt', 'scsi-0QEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9', 'scsi-SQEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a31c7786--f287--566f--81cf--65786b8dbda6-osd--block--a31c7786--f287--566f--81cf--65786b8dbda6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1bhDeb-HRF6-15Pb-u6KA-690z-4Xkb-oXjYuE', 'scsi-0QEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5', 'scsi-SQEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 00:59:28.397459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825', 'scsi-SQEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-25 00:59:28.397474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-25 00:59:28.397484 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.397494 | orchestrator |
2025-05-25 00:59:28.397504 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************
2025-05-25 00:59:28.397514 | orchestrator | Sunday 25 May 2025 00:47:16 +0000 (0:00:01.938) 0:00:36.565 ************
2025-05-25 00:59:28.397524 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.397533 | orchestrator |
2025-05-25 00:59:28.397543 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-05-25 00:59:28.397552 | orchestrator | Sunday 25 May 2025 00:47:16 +0000 (0:00:00.293) 0:00:36.859 ************
2025-05-25 00:59:28.397562 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.397572 | orchestrator |
2025-05-25 00:59:28.397581 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-05-25 00:59:28.397591 | orchestrator | Sunday 25 May 2025 00:47:16 +0000 (0:00:00.183) 0:00:37.043 ************
2025-05-25 00:59:28.397600 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.397610 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.397619 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.397629 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.397638 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.397648 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.397657 | orchestrator |
2025-05-25 00:59:28.397668 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-05-25 00:59:28.397684 | orchestrator | Sunday 25 May 2025 00:47:17 +0000 (0:00:01.061) 0:00:38.104 ************
2025-05-25 00:59:28.397700 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.397716 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.397732 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.397747 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.397762 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.397778 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.397803 | orchestrator |
2025-05-25 00:59:28.397820 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-05-25 00:59:28.397837 | orchestrator | Sunday 25 May 2025 00:47:19 +0000 (0:00:01.613) 0:00:39.717 ************
2025-05-25 00:59:28.397847 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.397856 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.397870 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.397946 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.397966 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.397982 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.397999 | orchestrator |
2025-05-25 00:59:28.398014 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-25 00:59:28.398085 | orchestrator | Sunday 25 May 2025 00:47:20 +0000 (0:00:00.944) 0:00:40.661 ************
2025-05-25 00:59:28.398095 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.398102 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.398110 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.398118 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.398126 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.398133 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.398141 | orchestrator |
2025-05-25 00:59:28.398149 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-25 00:59:28.398157 | orchestrator | Sunday 25 May 2025 00:47:21 +0000 (0:00:01.004) 0:00:41.666 ************
2025-05-25 00:59:28.398165 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.398183 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.398192 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.398208 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.398216 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.398224 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.398232 | orchestrator |
2025-05-25 00:59:28.398240 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-25 00:59:28.398247 | orchestrator | Sunday 25 May 2025 00:47:21 +0000 (0:00:00.737) 0:00:42.403 ************
2025-05-25 00:59:28.398255 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.398263 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.398270 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.398278 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.398286 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.398293 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.398301 | orchestrator |
2025-05-25 00:59:28.398309 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-25 00:59:28.398316 | orchestrator | Sunday 25 May 2025 00:47:22 +0000 (0:00:01.032) 0:00:43.436 ************
2025-05-25 00:59:28.398324 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.398332 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.398339 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.398347 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.398355 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.398363 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.398370 | orchestrator |
2025-05-25 00:59:28.398378 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-25 00:59:28.398385 | orchestrator | Sunday 25 May 2025 00:47:23 +0000 (0:00:00.820) 0:00:44.256 ************
2025-05-25 00:59:28.398393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.398417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.398425 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 00:59:28.398433 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 00:59:28.398440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 00:59:28.398448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.398456 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.398471 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 00:59:28.398479 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 00:59:28.398487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 00:59:28.398500 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.398508 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 00:59:28.398515 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.398523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 00:59:28.398531 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 00:59:28.398538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 00:59:28.398546 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.398554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 00:59:28.398561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 00:59:28.398569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 00:59:28.398577 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 00:59:28.398584 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.398592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 00:59:28.398600 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.398609 | orchestrator |
2025-05-25 00:59:28.398622 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-25 00:59:28.398636 | orchestrator | Sunday 25 May 2025 00:47:26 +0000 (0:00:02.433) 0:00:46.689 ************
2025-05-25 00:59:28.398649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.398662 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.398675 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 00:59:28.398686 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 00:59:28.398700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.398713 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.398725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 00:59:28.398733 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 00:59:28.398761 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.398769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 00:59:28.398777 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 00:59:28.398785 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 00:59:28.398792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 00:59:28.398800 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.398808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 00:59:28.398815 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 00:59:28.398823 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 00:59:28.398831 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.398838 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 00:59:28.398846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 00:59:28.398854 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.398861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 00:59:28.398869 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 00:59:28.398877 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.398902 | orchestrator |
2025-05-25 00:59:28.398915 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-25 00:59:28.398924 | orchestrator | Sunday 25 May 2025 00:47:28 +0000 (0:00:02.342) 0:00:49.032 ************
2025-05-25 00:59:28.398932 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 00:59:28.398946 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.398954 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 00:59:28.398962 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 00:59:28.398969 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 00:59:28.398977 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 00:59:28.398985 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.398992 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 00:59:28.399000 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 00:59:28.399008 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 00:59:28.399015 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 00:59:28.399023 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 00:59:28.399031 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.399038 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 00:59:28.399046 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 00:59:28.399054 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 00:59:28.399062 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 00:59:28.399069 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 00:59:28.399077 | orchestrator |
2025-05-25 00:59:28.399085 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-25 00:59:28.399099 | orchestrator | Sunday 25 May 2025 00:47:33 +0000 (0:00:04.532) 0:00:53.564 ************
2025-05-25 00:59:28.399107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.399115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.399122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.399130 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.399138 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 00:59:28.399146 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 00:59:28.399153 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 00:59:28.399166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 00:59:28.399174 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.399181 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 00:59:28.399189 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 00:59:28.399197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 00:59:28.399205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 00:59:28.399212 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.399220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 00:59:28.399228 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 00:59:28.399236 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 00:59:28.399243 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 00:59:28.399251 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.399259 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.399266 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 00:59:28.399274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 00:59:28.399284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 00:59:28.399299 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.399318 | orchestrator |
2025-05-25 00:59:28.399330 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-25 00:59:28.399342 | orchestrator | Sunday 25 May 2025 00:47:34 +0000 (0:00:01.203) 0:00:54.768 ************
2025-05-25 00:59:28.399364 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.399376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.399389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.399401 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-25 00:59:28.399414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-25 00:59:28.399428 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.399440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-25 00:59:28.399453 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-25 00:59:28.399464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-25 00:59:28.399471 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-25 00:59:28.399479 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.399487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 00:59:28.399495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 00:59:28.399502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 00:59:28.399510 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.399518 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 00:59:28.399526 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.399534 | orchestrator |
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-25 00:59:28.399541 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-25 00:59:28.399549 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.399557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-25 00:59:28.399565 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-25 00:59:28.399572 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-25 00:59:28.399580 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.399588 | orchestrator | 2025-05-25 00:59:28.399596 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-25 00:59:28.399603 | orchestrator | Sunday 25 May 2025 00:47:35 +0000 (0:00:01.124) 0:00:55.893 ************ 2025-05-25 00:59:28.399611 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-25 00:59:28.399619 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-25 00:59:28.399628 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-25 00:59:28.399636 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-25 00:59:28.399644 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-25 00:59:28.399652 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-25 00:59:28.399659 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-25 00:59:28.399667 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-25 00:59:28.399675 | orchestrator | ok: [testbed-node-2] => 
(item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-25 00:59:28.399689 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-25 00:59:28.399697 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-25 00:59:28.399704 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-25 00:59:28.399712 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.399720 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-25 00:59:28.399728 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-25 00:59:28.399747 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-25 00:59:28.399755 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.399763 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-25 00:59:28.399771 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-25 00:59:28.399779 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-25 00:59:28.399786 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.399794 | orchestrator | 2025-05-25 00:59:28.399802 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-25 00:59:28.399810 | orchestrator | Sunday 25 May 2025 00:47:36 +0000 (0:00:01.544) 0:00:57.437 ************ 2025-05-25 00:59:28.399818 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.399825 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.399833 | orchestrator | skipping: [testbed-node-2] 2025-05-25 
00:59:28.399841 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.399849 | orchestrator | 2025-05-25 00:59:28.399857 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-25 00:59:28.399865 | orchestrator | Sunday 25 May 2025 00:47:38 +0000 (0:00:01.387) 0:00:58.825 ************ 2025-05-25 00:59:28.399873 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.399881 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.399908 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.399916 | orchestrator | 2025-05-25 00:59:28.399924 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-25 00:59:28.399932 | orchestrator | Sunday 25 May 2025 00:47:38 +0000 (0:00:00.650) 0:00:59.475 ************ 2025-05-25 00:59:28.399940 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.399948 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.399956 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.399964 | orchestrator | 2025-05-25 00:59:28.399971 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-25 00:59:28.399979 | orchestrator | Sunday 25 May 2025 00:47:39 +0000 (0:00:00.688) 0:01:00.164 ************ 2025-05-25 00:59:28.399987 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.399995 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.400003 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.400010 | orchestrator | 2025-05-25 00:59:28.400018 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-25 00:59:28.400026 | orchestrator | Sunday 25 May 2025 00:47:40 +0000 (0:00:00.698) 0:01:00.862 ************ 2025-05-25 
00:59:28.400034 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.400046 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.400064 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.400081 | orchestrator | 2025-05-25 00:59:28.400093 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-25 00:59:28.400106 | orchestrator | Sunday 25 May 2025 00:47:41 +0000 (0:00:00.985) 0:01:01.847 ************ 2025-05-25 00:59:28.400119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.400132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.400145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.400160 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400172 | orchestrator | 2025-05-25 00:59:28.400184 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-25 00:59:28.400192 | orchestrator | Sunday 25 May 2025 00:47:42 +0000 (0:00:00.862) 0:01:02.710 ************ 2025-05-25 00:59:28.400200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.400215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.400223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.400231 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400239 | orchestrator | 2025-05-25 00:59:28.400246 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-25 00:59:28.400254 | orchestrator | Sunday 25 May 2025 00:47:42 +0000 (0:00:00.697) 0:01:03.408 ************ 2025-05-25 00:59:28.400262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.400270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.400278 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.400285 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400293 | orchestrator | 2025-05-25 00:59:28.400301 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-25 00:59:28.400309 | orchestrator | Sunday 25 May 2025 00:47:43 +0000 (0:00:00.890) 0:01:04.298 ************ 2025-05-25 00:59:28.400316 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.400324 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.400332 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.400340 | orchestrator | 2025-05-25 00:59:28.400348 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-25 00:59:28.400362 | orchestrator | Sunday 25 May 2025 00:47:44 +0000 (0:00:00.532) 0:01:04.830 ************ 2025-05-25 00:59:28.400370 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-25 00:59:28.400378 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-25 00:59:28.400385 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-25 00:59:28.400393 | orchestrator | 2025-05-25 00:59:28.400401 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-25 00:59:28.400409 | orchestrator | Sunday 25 May 2025 00:47:45 +0000 (0:00:00.993) 0:01:05.824 ************ 2025-05-25 00:59:28.400417 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400424 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.400432 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.400440 | orchestrator | 2025-05-25 00:59:28.400453 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-25 00:59:28.400462 | orchestrator | Sunday 25 May 2025 00:47:45 +0000 (0:00:00.635) 0:01:06.459 ************ 2025-05-25 00:59:28.400469 | orchestrator | skipping: [testbed-node-3] 
2025-05-25 00:59:28.400477 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.400485 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.400493 | orchestrator | 2025-05-25 00:59:28.400501 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-25 00:59:28.400509 | orchestrator | Sunday 25 May 2025 00:47:46 +0000 (0:00:00.696) 0:01:07.156 ************ 2025-05-25 00:59:28.400517 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-25 00:59:28.400524 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400532 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-25 00:59:28.400540 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-25 00:59:28.400548 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.400555 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.400563 | orchestrator | 2025-05-25 00:59:28.400571 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-25 00:59:28.400579 | orchestrator | Sunday 25 May 2025 00:47:47 +0000 (0:00:00.750) 0:01:07.907 ************ 2025-05-25 00:59:28.400587 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-25 00:59:28.400595 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400603 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-25 00:59:28.400611 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.400630 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-25 00:59:28.400638 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.400646 | orchestrator | 2025-05-25 00:59:28.400654 | 
orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-25 00:59:28.400661 | orchestrator | Sunday 25 May 2025 00:47:48 +0000 (0:00:00.939) 0:01:08.847 ************ 2025-05-25 00:59:28.400669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.400677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.400685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.400693 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400700 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-25 00:59:28.400708 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-25 00:59:28.400716 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-25 00:59:28.400724 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-25 00:59:28.400731 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.400739 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-25 00:59:28.400747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-25 00:59:28.400755 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.400762 | orchestrator | 2025-05-25 00:59:28.400770 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-25 00:59:28.400778 | orchestrator | Sunday 25 May 2025 00:47:49 +0000 (0:00:01.188) 0:01:10.035 ************ 2025-05-25 00:59:28.400786 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.400794 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.400802 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.400809 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.400817 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.400825 | orchestrator | 
skipping: [testbed-node-5] 2025-05-25 00:59:28.400833 | orchestrator | 2025-05-25 00:59:28.400840 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-25 00:59:28.400848 | orchestrator | Sunday 25 May 2025 00:47:50 +0000 (0:00:00.688) 0:01:10.724 ************ 2025-05-25 00:59:28.400856 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 00:59:28.400864 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 00:59:28.400872 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 00:59:28.400880 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-25 00:59:28.400934 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-25 00:59:28.400944 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-25 00:59:28.400951 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-25 00:59:28.400959 | orchestrator | 2025-05-25 00:59:28.400967 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-25 00:59:28.400975 | orchestrator | Sunday 25 May 2025 00:47:50 +0000 (0:00:00.811) 0:01:11.535 ************ 2025-05-25 00:59:28.400982 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 00:59:28.400996 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 00:59:28.401004 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 00:59:28.401012 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-25 00:59:28.401019 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-05-25 00:59:28.401027 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-25 00:59:28.401046 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-25 00:59:28.401054 | orchestrator | 2025-05-25 00:59:28.401062 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-25 00:59:28.401070 | orchestrator | Sunday 25 May 2025 00:47:52 +0000 (0:00:01.631) 0:01:13.167 ************ 2025-05-25 00:59:28.401078 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.401086 | orchestrator | 2025-05-25 00:59:28.401094 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-25 00:59:28.401102 | orchestrator | Sunday 25 May 2025 00:47:53 +0000 (0:00:01.105) 0:01:14.272 ************ 2025-05-25 00:59:28.401109 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.401117 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.401125 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401133 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.401140 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401148 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401156 | orchestrator | 2025-05-25 00:59:28.401164 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-25 00:59:28.401178 | orchestrator | Sunday 25 May 2025 00:47:54 +0000 (0:00:01.008) 0:01:15.281 ************ 2025-05-25 00:59:28.401191 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401204 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401216 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401229 | 
orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.401242 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.401255 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.401268 | orchestrator | 2025-05-25 00:59:28.401281 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-25 00:59:28.401294 | orchestrator | Sunday 25 May 2025 00:47:56 +0000 (0:00:01.294) 0:01:16.576 ************ 2025-05-25 00:59:28.401306 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401315 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401322 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401329 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.401336 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.401342 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.401349 | orchestrator | 2025-05-25 00:59:28.401355 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-25 00:59:28.401362 | orchestrator | Sunday 25 May 2025 00:47:57 +0000 (0:00:01.332) 0:01:17.909 ************ 2025-05-25 00:59:28.401369 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401375 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401382 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401388 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.401395 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.401401 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.401408 | orchestrator | 2025-05-25 00:59:28.401415 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-25 00:59:28.401421 | orchestrator | Sunday 25 May 2025 00:47:58 +0000 (0:00:01.420) 0:01:19.329 ************ 2025-05-25 00:59:28.401428 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.401434 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.401441 | 
orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401448 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401454 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401461 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.401467 | orchestrator | 2025-05-25 00:59:28.401474 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-25 00:59:28.401480 | orchestrator | Sunday 25 May 2025 00:47:59 +0000 (0:00:01.160) 0:01:20.490 ************ 2025-05-25 00:59:28.401493 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401499 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401506 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401513 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401519 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401526 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401532 | orchestrator | 2025-05-25 00:59:28.401539 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-25 00:59:28.401545 | orchestrator | Sunday 25 May 2025 00:48:00 +0000 (0:00:00.819) 0:01:21.309 ************ 2025-05-25 00:59:28.401552 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401560 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401571 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401588 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401599 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401609 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401619 | orchestrator | 2025-05-25 00:59:28.401630 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-25 00:59:28.401640 | orchestrator | Sunday 25 May 2025 00:48:02 +0000 (0:00:01.437) 0:01:22.746 ************ 2025-05-25 00:59:28.401651 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401661 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401672 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401684 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401695 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401706 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401714 | orchestrator | 2025-05-25 00:59:28.401721 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-25 00:59:28.401728 | orchestrator | Sunday 25 May 2025 00:48:02 +0000 (0:00:00.741) 0:01:23.488 ************ 2025-05-25 00:59:28.401741 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401747 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401754 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401761 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401767 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401774 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401780 | orchestrator | 2025-05-25 00:59:28.401787 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-25 00:59:28.401793 | orchestrator | Sunday 25 May 2025 00:48:03 +0000 (0:00:00.839) 0:01:24.328 ************ 2025-05-25 00:59:28.401800 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401806 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401813 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401824 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401831 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401838 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401844 | orchestrator | 2025-05-25 00:59:28.401851 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-05-25 00:59:28.401858 | orchestrator | Sunday 25 May 2025 00:48:04 +0000 (0:00:00.763) 0:01:25.092 ************ 2025-05-25 00:59:28.401864 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.401871 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.401877 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.401884 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.401909 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.401916 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.401922 | orchestrator | 2025-05-25 00:59:28.401929 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-25 00:59:28.401936 | orchestrator | Sunday 25 May 2025 00:48:06 +0000 (0:00:01.537) 0:01:26.629 ************ 2025-05-25 00:59:28.401942 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.401949 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.401955 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.401962 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.401975 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.401981 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.401988 | orchestrator | 2025-05-25 00:59:28.401994 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-25 00:59:28.402001 | orchestrator | Sunday 25 May 2025 00:48:06 +0000 (0:00:00.602) 0:01:27.231 ************ 2025-05-25 00:59:28.402008 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.402014 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.402043 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.402050 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.402057 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.402063 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.402070 | orchestrator | 2025-05-25 00:59:28.402076 | orchestrator 
| TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-25 00:59:28.402083 | orchestrator | Sunday 25 May 2025 00:48:07 +0000 (0:00:00.784) 0:01:28.016 ************
2025-05-25 00:59:28.402089 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402096 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402102 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402109 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.402116 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.402122 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.402129 | orchestrator |
2025-05-25 00:59:28.402135 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-25 00:59:28.402142 | orchestrator | Sunday 25 May 2025 00:48:08 +0000 (0:00:00.600) 0:01:28.617 ************
2025-05-25 00:59:28.402148 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402155 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402161 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402168 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.402175 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.402181 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.402188 | orchestrator |
2025-05-25 00:59:28.402194 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-25 00:59:28.402201 | orchestrator | Sunday 25 May 2025 00:48:08 +0000 (0:00:00.924) 0:01:29.541 ************
2025-05-25 00:59:28.402208 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402214 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402221 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402227 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.402234 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.402240 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.402247 | orchestrator |
2025-05-25 00:59:28.402254 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-25 00:59:28.402260 | orchestrator | Sunday 25 May 2025 00:48:09 +0000 (0:00:00.643) 0:01:30.185 ************
2025-05-25 00:59:28.402267 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402273 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402280 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402286 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402293 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402299 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402306 | orchestrator |
2025-05-25 00:59:28.402313 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-25 00:59:28.402324 | orchestrator | Sunday 25 May 2025 00:48:10 +0000 (0:00:00.877) 0:01:31.063 ************
2025-05-25 00:59:28.402340 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402353 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402364 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402374 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402384 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402395 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402406 | orchestrator |
2025-05-25 00:59:28.402418 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-25 00:59:28.402437 | orchestrator | Sunday 25 May 2025 00:48:11 +0000 (0:00:00.572) 0:01:31.635 ************
2025-05-25 00:59:28.402449 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.402460 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.402471 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.402480 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402487 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402494 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402500 | orchestrator |
2025-05-25 00:59:28.402507 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-25 00:59:28.402520 | orchestrator | Sunday 25 May 2025 00:48:11 +0000 (0:00:00.895) 0:01:32.530 ************
2025-05-25 00:59:28.402526 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.402533 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.402540 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.402546 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.402553 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.402559 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.402566 | orchestrator |
2025-05-25 00:59:28.402572 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-25 00:59:28.402579 | orchestrator | Sunday 25 May 2025 00:48:12 +0000 (0:00:00.817) 0:01:33.348 ************
2025-05-25 00:59:28.402586 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402592 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402613 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402620 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402627 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402633 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402640 | orchestrator |
2025-05-25 00:59:28.402646 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-25 00:59:28.402653 | orchestrator | Sunday 25 May 2025 00:48:13 +0000 (0:00:00.999) 0:01:34.348 ************
2025-05-25 00:59:28.402660 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402666 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402673 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402679 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402686 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402692 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402699 | orchestrator |
2025-05-25 00:59:28.402705 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-25 00:59:28.402712 | orchestrator | Sunday 25 May 2025 00:48:14 +0000 (0:00:00.569) 0:01:34.917 ************
2025-05-25 00:59:28.402718 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402725 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402731 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402738 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402744 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402751 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402757 | orchestrator |
2025-05-25 00:59:28.402764 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-25 00:59:28.402771 | orchestrator | Sunday 25 May 2025 00:48:15 +0000 (0:00:00.835) 0:01:35.753 ************
2025-05-25 00:59:28.402777 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402784 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402790 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402797 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402803 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402812 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402826 | orchestrator |
2025-05-25 00:59:28.402842 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-25 00:59:28.402853 | orchestrator | Sunday 25 May 2025 00:48:15 +0000 (0:00:00.621) 0:01:36.375 ************
2025-05-25 00:59:28.402863 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402880 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.402910 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.402921 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.402931 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.402941 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.402951 | orchestrator |
2025-05-25 00:59:28.402962 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-25 00:59:28.402973 | orchestrator | Sunday 25 May 2025 00:48:16 +0000 (0:00:00.816) 0:01:37.191 ************
2025-05-25 00:59:28.402983 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.402995 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403006 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403017 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403025 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403032 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403038 | orchestrator |
2025-05-25 00:59:28.403045 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-25 00:59:28.403052 | orchestrator | Sunday 25 May 2025 00:48:17 +0000 (0:00:00.605) 0:01:37.797 ************
2025-05-25 00:59:28.403059 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403065 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403071 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403078 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403084 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403091 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403097 | orchestrator |
2025-05-25 00:59:28.403104 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-25 00:59:28.403111 | orchestrator | Sunday 25 May 2025 00:48:18 +0000 (0:00:00.811) 0:01:38.609 ************
2025-05-25 00:59:28.403118 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403124 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403130 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403137 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403143 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403150 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403156 | orchestrator |
2025-05-25 00:59:28.403163 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-25 00:59:28.403170 | orchestrator | Sunday 25 May 2025 00:48:18 +0000 (0:00:00.641) 0:01:39.251 ************
2025-05-25 00:59:28.403176 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403183 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403189 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403195 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403202 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403208 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403215 | orchestrator |
2025-05-25 00:59:28.403222 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-25 00:59:28.403228 | orchestrator | Sunday 25 May 2025 00:48:19 +0000 (0:00:00.793) 0:01:40.044 ************
2025-05-25 00:59:28.403235 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403242 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403254 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403261 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403267 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403274 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403280 | orchestrator |
2025-05-25 00:59:28.403287 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-25 00:59:28.403293 | orchestrator | Sunday 25 May 2025 00:48:20 +0000 (0:00:00.666) 0:01:40.711 ************
2025-05-25 00:59:28.403300 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403307 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403320 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403327 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403333 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403340 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403346 | orchestrator |
2025-05-25 00:59:28.403353 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-25 00:59:28.403360 | orchestrator | Sunday 25 May 2025 00:48:21 +0000 (0:00:00.987) 0:01:41.699 ************
2025-05-25 00:59:28.403367 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403373 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403380 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403386 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403393 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403399 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403406 | orchestrator |
2025-05-25 00:59:28.403412 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-25 00:59:28.403419 | orchestrator | Sunday 25 May 2025 00:48:21 +0000 (0:00:00.825) 0:01:42.524 ************
2025-05-25 00:59:28.403426 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-25 00:59:28.403432 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-25 00:59:28.403439 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403445 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-25 00:59:28.403452 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-25 00:59:28.403459 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403471 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-25 00:59:28.403487 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-25 00:59:28.403500 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403510 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.403520 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.403531 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403542 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.403553 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.403564 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403575 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.403587 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.403597 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403604 | orchestrator |
2025-05-25 00:59:28.403610 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-25 00:59:28.403617 | orchestrator | Sunday 25 May 2025 00:48:22 +0000 (0:00:00.922) 0:01:43.447 ************
2025-05-25 00:59:28.403624 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-25 00:59:28.403630 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-25 00:59:28.403637 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-25 00:59:28.403643 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-25 00:59:28.403650 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403656 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-25 00:59:28.403663 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-25 00:59:28.403669 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403676 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-25 00:59:28.403683 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-25 00:59:28.403689 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403696 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-25 00:59:28.403702 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-25 00:59:28.403709 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403715 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403792 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-25 00:59:28.403817 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-25 00:59:28.403824 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.403830 | orchestrator |
2025-05-25 00:59:28.403837 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-25 00:59:28.403844 | orchestrator | Sunday 25 May 2025 00:48:23 +0000 (0:00:00.712) 0:01:44.160 ************
2025-05-25 00:59:28.403850 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.403966 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.403976 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.403983 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.403990 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.403996 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404003 | orchestrator |
2025-05-25 00:59:28.404010 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-25 00:59:28.404016 | orchestrator | Sunday 25 May 2025 00:48:24 +0000 (0:00:00.684) 0:01:45.030 ************
2025-05-25 00:59:28.404023 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404030 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404036 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404043 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404049 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404056 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404063 | orchestrator |
2025-05-25 00:59:28.404070 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-25 00:59:28.404085 | orchestrator | Sunday 25 May 2025 00:48:25 +0000 (0:00:00.991) 0:01:45.714 ************
2025-05-25 00:59:28.404092 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404098 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404105 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404111 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404118 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404125 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404131 | orchestrator |
2025-05-25 00:59:28.404138 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-25 00:59:28.404144 | orchestrator | Sunday 25 May 2025 00:48:26 +0000 (0:00:00.991) 0:01:46.706 ************
2025-05-25 00:59:28.404151 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404157 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404169 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404176 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404182 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404189 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404195 | orchestrator |
2025-05-25 00:59:28.404202 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-25 00:59:28.404209 | orchestrator | Sunday 25 May 2025 00:48:26 +0000 (0:00:00.828) 0:01:47.534 ************
2025-05-25 00:59:28.404216 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404222 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404229 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404235 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404242 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404248 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404255 | orchestrator |
2025-05-25 00:59:28.404261 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-25 00:59:28.404268 | orchestrator | Sunday 25 May 2025 00:48:28 +0000 (0:00:01.099) 0:01:48.634 ************
2025-05-25 00:59:28.404275 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404281 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404288 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404294 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404301 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404313 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404320 | orchestrator |
2025-05-25 00:59:28.404327 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-25 00:59:28.404333 | orchestrator | Sunday 25 May 2025 00:48:28 +0000 (0:00:00.830) 0:01:49.464 ************
2025-05-25 00:59:28.404340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.404346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.404353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.404359 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404366 | orchestrator |
2025-05-25 00:59:28.404373 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-25 00:59:28.404379 | orchestrator | Sunday 25 May 2025 00:48:30 +0000 (0:00:01.183) 0:01:50.648 ************
2025-05-25 00:59:28.404412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.404419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.404425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.404432 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404439 | orchestrator |
2025-05-25 00:59:28.404445 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-25 00:59:28.404452 | orchestrator | Sunday 25 May 2025 00:48:30 +0000 (0:00:00.493) 0:01:51.141 ************
2025-05-25 00:59:28.404476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.404483 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.404490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.404496 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404503 | orchestrator |
2025-05-25 00:59:28.404510 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.404516 | orchestrator | Sunday 25 May 2025 00:48:30 +0000 (0:00:00.394) 0:01:51.536 ************
2025-05-25 00:59:28.404522 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404528 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404534 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404540 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404547 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404553 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404559 | orchestrator |
2025-05-25 00:59:28.404565 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-25 00:59:28.404588 | orchestrator | Sunday 25 May 2025 00:48:31 +0000 (0:00:00.628) 0:01:52.164 ************
2025-05-25 00:59:28.404595 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 00:59:28.404601 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404607 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 00:59:28.404613 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.404619 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404625 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.404631 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404637 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404643 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.404650 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404656 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.404662 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404684 | orchestrator |
2025-05-25 00:59:28.404691 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-25 00:59:28.404697 | orchestrator | Sunday 25 May 2025 00:48:32 +0000 (0:00:01.084) 0:01:53.249 ************
2025-05-25 00:59:28.404722 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404729 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404735 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404746 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404752 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404775 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404781 | orchestrator |
2025-05-25 00:59:28.404792 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.404798 | orchestrator | Sunday 25 May 2025 00:48:33 +0000 (0:00:00.617) 0:01:53.866 ************
2025-05-25 00:59:28.404805 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404811 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404817 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404823 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404829 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404836 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404842 | orchestrator |
2025-05-25 00:59:28.404848 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-25 00:59:28.404858 | orchestrator | Sunday 25 May 2025 00:48:34 +0000 (0:00:00.816) 0:01:54.682 ************
2025-05-25 00:59:28.404864 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 00:59:28.404899 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404905 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 00:59:28.404911 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404917 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.404923 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.404930 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.404936 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.404942 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.404948 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.404954 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.404960 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.404966 | orchestrator |
2025-05-25 00:59:28.404972 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-25 00:59:28.404978 | orchestrator | Sunday 25 May 2025 00:48:34 +0000 (0:00:00.782) 0:01:55.464 ************
2025-05-25 00:59:28.404984 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.404990 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.404997 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405003 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.405009 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405015 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.405021 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405027 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.405034 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405040 | orchestrator |
2025-05-25 00:59:28.405046 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-25 00:59:28.405052 | orchestrator | Sunday 25 May 2025 00:48:35 +0000 (0:00:00.842) 0:01:56.307 ************
2025-05-25 00:59:28.405058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.405082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.405089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.405095 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.405102 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-25 00:59:28.405108 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-25 00:59:28.405114 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-25 00:59:28.405120 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.405131 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-25 00:59:28.405137 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-25 00:59:28.405143 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-25 00:59:28.405149 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.405161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.405167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.405174 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-25 00:59:28.405180 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-25 00:59:28.405186 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-25 00:59:28.405192 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405198 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405205 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-25 00:59:28.405211 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-25 00:59:28.405217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-25 00:59:28.405223 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405229 | orchestrator |
2025-05-25 00:59:28.405235 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-25 00:59:28.405242 | orchestrator | Sunday 25 May 2025 00:48:37 +0000 (0:00:01.487) 0:01:57.794 ************
2025-05-25 00:59:28.405248 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.405254 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.405260 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405266 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405272 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405278 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405284 | orchestrator |
2025-05-25 00:59:28.405291 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-25 00:59:28.405297 | orchestrator | Sunday 25 May 2025 00:48:38 +0000 (0:00:01.245) 0:01:59.039 ************
2025-05-25 00:59:28.405303 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.405309 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.405319 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405326 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-25 00:59:28.405332 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405338 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-25 00:59:28.405344 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405350 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-25 00:59:28.405357 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405363 | orchestrator |
2025-05-25 00:59:28.405369 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-25 00:59:28.405375 | orchestrator | Sunday 25 May 2025 00:48:39 +0000 (0:00:01.347) 0:02:00.387 ************
2025-05-25 00:59:28.405381 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.405391 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.405398 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405404 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405410 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405416 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405422 | orchestrator |
2025-05-25 00:59:28.405429 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-25 00:59:28.405435 | orchestrator | Sunday 25 May 2025 00:48:41 +0000 (0:00:01.339) 0:02:01.726 ************
2025-05-25 00:59:28.405441 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.405447 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.405453 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405459 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405470 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405476 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405482 | orchestrator |
2025-05-25 00:59:28.405488 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] ***********
2025-05-25 00:59:28.405495 | orchestrator | Sunday 25 May 2025 00:48:42 +0000 (0:00:01.319) 0:02:03.046 ************
2025-05-25 00:59:28.405501 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.405507 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.405513 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.405519 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.405525 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.405531 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.405537 | orchestrator |
2025-05-25 00:59:28.405544 | orchestrator | TASK [ceph-container-common : enable ceph.target] ******************************
2025-05-25 00:59:28.405550 | orchestrator | Sunday 25 May 2025 00:48:43 +0000 (0:00:01.394) 0:02:04.441 ************
2025-05-25 00:59:28.405556 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.405562 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.405568 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.405574 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.405581 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.405591 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.405601 | orchestrator |
2025-05-25 00:59:28.405613 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] ***********************
2025-05-25 00:59:28.405623 | orchestrator | Sunday 25 May 2025 00:48:45 +0000 (0:00:02.099) 0:02:06.541 ************
2025-05-25 00:59:28.405634 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.405645 | orchestrator |
2025-05-25 00:59:28.405655 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************
2025-05-25 00:59:28.405664 | orchestrator | Sunday 25 May 2025 00:48:47 +0000 (0:00:01.255) 0:02:07.796 ************
2025-05-25 00:59:28.405673 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.405682 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.405692 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405702 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405713 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405723 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405733 | orchestrator |
2025-05-25 00:59:28.405742 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] ****************
2025-05-25 00:59:28.405753 | orchestrator | Sunday 25 May 2025 00:48:47 +0000 (0:00:00.753) 0:02:08.549 ************
2025-05-25 00:59:28.405764 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.405773 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.405783 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.405790 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.405796 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.405802 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.405808 | orchestrator |
2025-05-25 00:59:28.405815 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] **************************
2025-05-25 00:59:28.405821 | orchestrator | Sunday 25 May 2025 00:48:48 +0000 (0:00:00.956) 0:02:09.506 ************
2025-05-25 00:59:28.405827 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-25 00:59:28.405833 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-25 00:59:28.405839 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-25 00:59:28.405845 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-25 00:59:28.405851 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-25 00:59:28.405857 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-25 00:59:28.405870 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-25 00:59:28.405877 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-25 00:59:28.405883 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-25 00:59:28.405931 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-25 00:59:28.405944 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-25 00:59:28.405950 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-25 00:59:28.405956 | orchestrator |
2025-05-25 00:59:28.405962 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ********************
2025-05-25 00:59:28.405969 | orchestrator | Sunday 25 May 2025 00:48:50 +0000 (0:00:01.424) 0:02:10.930 ************
2025-05-25 00:59:28.405975 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.405981 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.405987 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.405993 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.405999 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.406006 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.406012 | orchestrator |
2025-05-25 00:59:28.406181 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************
2025-05-25 00:59:28.406190 | orchestrator | Sunday 25 May 2025 00:48:52 +0000 (0:00:01.718) 0:02:12.649 ************
2025-05-25 00:59:28.406197 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.406203 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.406209 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.406215 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.406222 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.406228 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.406234 | orchestrator |
2025-05-25 00:59:28.406240 | orchestrator | TASK [ceph-container-common : include registry.yml] ****************************
2025-05-25 00:59:28.406247 | orchestrator | Sunday 25 May 2025 00:48:52 +0000 (0:00:00.701) 0:02:13.351 ************
2025-05-25 00:59:28.406253 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.406259 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.406265 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.406271 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.406277 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.406284 | orchestrator | skipping:
[testbed-node-5] 2025-05-25 00:59:28.406290 | orchestrator | 2025-05-25 00:59:28.406296 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-25 00:59:28.406302 | orchestrator | Sunday 25 May 2025 00:48:53 +0000 (0:00:00.809) 0:02:14.160 ************ 2025-05-25 00:59:28.406309 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.406315 | orchestrator | 2025-05-25 00:59:28.406322 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-25 00:59:28.406328 | orchestrator | Sunday 25 May 2025 00:48:54 +0000 (0:00:01.229) 0:02:15.389 ************ 2025-05-25 00:59:28.406334 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.406340 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.406347 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.406353 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.406359 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.406365 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.406372 | orchestrator | 2025-05-25 00:59:28.406378 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-25 00:59:28.406384 | orchestrator | Sunday 25 May 2025 00:49:47 +0000 (0:00:52.260) 0:03:07.650 ************ 2025-05-25 00:59:28.406391 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 00:59:28.406403 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 00:59:28.406409 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 00:59:28.406416 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.406422 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 00:59:28.406428 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 00:59:28.406434 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 00:59:28.406440 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.406446 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 00:59:28.406452 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 00:59:28.406457 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 00:59:28.406462 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.406468 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 00:59:28.406473 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 00:59:28.406479 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 00:59:28.406484 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.406490 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 00:59:28.406495 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 00:59:28.406500 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 00:59:28.406506 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.406511 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-25 00:59:28.406517 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-25 00:59:28.406522 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-25 00:59:28.406527 | orchestrator | skipping: 
[testbed-node-5] 2025-05-25 00:59:28.406533 | orchestrator | 2025-05-25 00:59:28.406538 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-25 00:59:28.406544 | orchestrator | Sunday 25 May 2025 00:49:47 +0000 (0:00:00.683) 0:03:08.334 ************ 2025-05-25 00:59:28.406549 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.406598 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.406606 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.406611 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.406617 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.406622 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.406627 | orchestrator | 2025-05-25 00:59:28.406633 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-25 00:59:28.406638 | orchestrator | Sunday 25 May 2025 00:49:48 +0000 (0:00:00.538) 0:03:08.872 ************ 2025-05-25 00:59:28.406644 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.406667 | orchestrator | 2025-05-25 00:59:28.406673 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-25 00:59:28.406682 | orchestrator | Sunday 25 May 2025 00:49:48 +0000 (0:00:00.145) 0:03:09.018 ************ 2025-05-25 00:59:28.406688 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.406693 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.406699 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.406704 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.406709 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.406715 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.406720 | orchestrator | 2025-05-25 00:59:28.406725 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-25 00:59:28.406736 | 
orchestrator | Sunday 25 May 2025 00:49:49 +0000 (0:00:00.739) 0:03:09.757 ************ 2025-05-25 00:59:28.406741 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.406746 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.406752 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.406758 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.406767 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.406776 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.406784 | orchestrator | 2025-05-25 00:59:28.406793 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-25 00:59:28.406801 | orchestrator | Sunday 25 May 2025 00:49:49 +0000 (0:00:00.615) 0:03:10.373 ************ 2025-05-25 00:59:28.406810 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.406818 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.406827 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.406835 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.406845 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.406852 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.406861 | orchestrator | 2025-05-25 00:59:28.406870 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-25 00:59:28.406905 | orchestrator | Sunday 25 May 2025 00:49:50 +0000 (0:00:00.747) 0:03:11.120 ************ 2025-05-25 00:59:28.406915 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.406923 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.406932 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.406940 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.406946 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.406951 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.406956 | orchestrator | 2025-05-25 00:59:28.406962 | orchestrator | TASK 
[ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-25 00:59:28.406967 | orchestrator | Sunday 25 May 2025 00:49:52 +0000 (0:00:01.526) 0:03:12.647 ************ 2025-05-25 00:59:28.406973 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.406978 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.406983 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.406988 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.406994 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.406999 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.407004 | orchestrator | 2025-05-25 00:59:28.407010 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-25 00:59:28.407015 | orchestrator | Sunday 25 May 2025 00:49:53 +0000 (0:00:00.919) 0:03:13.566 ************ 2025-05-25 00:59:28.407021 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.407028 | orchestrator | 2025-05-25 00:59:28.407033 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-25 00:59:28.407038 | orchestrator | Sunday 25 May 2025 00:49:54 +0000 (0:00:01.293) 0:03:14.859 ************ 2025-05-25 00:59:28.407044 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.407049 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.407054 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.407060 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.407065 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.407070 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.407075 | orchestrator | 2025-05-25 00:59:28.407081 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-25 00:59:28.407086 | 
orchestrator | Sunday 25 May 2025 00:49:54 +0000 (0:00:00.640) 0:03:15.499 ************ 2025-05-25 00:59:28.407091 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.407097 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.407102 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.407107 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.407113 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.407127 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.407132 | orchestrator | 2025-05-25 00:59:28.407138 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-25 00:59:28.407143 | orchestrator | Sunday 25 May 2025 00:49:55 +0000 (0:00:00.872) 0:03:16.372 ************ 2025-05-25 00:59:28.407149 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.407154 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.407159 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.407164 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.407170 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.407175 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.407180 | orchestrator | 2025-05-25 00:59:28.407186 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-25 00:59:28.407191 | orchestrator | Sunday 25 May 2025 00:49:56 +0000 (0:00:00.619) 0:03:16.991 ************ 2025-05-25 00:59:28.407197 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.407202 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.407207 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.407213 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.407273 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.407281 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.407288 | orchestrator | 2025-05-25 
00:59:28.407294 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-25 00:59:28.407300 | orchestrator | Sunday 25 May 2025 00:49:57 +0000 (0:00:01.051) 0:03:18.043 ************ 2025-05-25 00:59:28.407307 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.407313 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.407319 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.407326 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.407332 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.407338 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.407344 | orchestrator | 2025-05-25 00:59:28.407354 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-25 00:59:28.407360 | orchestrator | Sunday 25 May 2025 00:49:58 +0000 (0:00:00.607) 0:03:18.650 ************ 2025-05-25 00:59:28.407366 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.407372 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.407379 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.407385 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.407391 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.407397 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.407403 | orchestrator | 2025-05-25 00:59:28.407410 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-25 00:59:28.407416 | orchestrator | Sunday 25 May 2025 00:49:59 +0000 (0:00:00.912) 0:03:19.563 ************ 2025-05-25 00:59:28.407423 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.407429 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.407435 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.407441 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.407447 | orchestrator | skipping: 
[testbed-node-4] 2025-05-25 00:59:28.407453 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.407459 | orchestrator | 2025-05-25 00:59:28.407465 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-25 00:59:28.407472 | orchestrator | Sunday 25 May 2025 00:49:59 +0000 (0:00:00.735) 0:03:20.298 ************ 2025-05-25 00:59:28.407478 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.407484 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.407490 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.407496 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.407503 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.407509 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.407515 | orchestrator | 2025-05-25 00:59:28.407521 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-25 00:59:28.407543 | orchestrator | Sunday 25 May 2025 00:50:01 +0000 (0:00:01.501) 0:03:21.800 ************ 2025-05-25 00:59:28.407550 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.407556 | orchestrator | 2025-05-25 00:59:28.407563 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-25 00:59:28.407569 | orchestrator | Sunday 25 May 2025 00:50:02 +0000 (0:00:01.220) 0:03:23.020 ************ 2025-05-25 00:59:28.407575 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-25 00:59:28.407582 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-25 00:59:28.407589 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-25 00:59:28.407594 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-25 00:59:28.407600 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/) 2025-05-25 00:59:28.407605 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-25 00:59:28.407610 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-25 00:59:28.407616 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-25 00:59:28.407621 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-25 00:59:28.407627 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-25 00:59:28.407632 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-25 00:59:28.407637 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-25 00:59:28.407643 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-25 00:59:28.407648 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-25 00:59:28.407654 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-25 00:59:28.407659 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-25 00:59:28.407665 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-25 00:59:28.407670 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-25 00:59:28.407675 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-25 00:59:28.407681 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-25 00:59:28.407686 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-25 00:59:28.407691 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-25 00:59:28.407697 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-25 00:59:28.407702 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-25 00:59:28.407707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-25 00:59:28.407713 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-25 00:59:28.407718 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-25 00:59:28.407723 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-25 00:59:28.407729 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-25 00:59:28.407734 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-25 00:59:28.407740 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-25 00:59:28.407785 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-25 00:59:28.407792 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-25 00:59:28.407798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-25 00:59:28.407803 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-25 00:59:28.407809 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-25 00:59:28.407814 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-25 00:59:28.407820 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-25 00:59:28.407837 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-25 00:59:28.407856 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-25 00:59:28.407866 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-25 00:59:28.407875 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-25 00:59:28.407883 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-25 00:59:28.407907 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-25 00:59:28.407915 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-25 00:59:28.407923 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-25 00:59:28.407932 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-25 00:59:28.407940 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-25 00:59:28.407949 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-25 00:59:28.407958 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-25 00:59:28.407966 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-25 00:59:28.407974 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-25 00:59:28.407979 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-25 00:59:28.407985 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-25 00:59:28.407990 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-25 00:59:28.407995 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-25 00:59:28.408001 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-25 00:59:28.408006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-25 00:59:28.408011 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-25 00:59:28.408017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-25 00:59:28.408022 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-25 00:59:28.408027 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-25 00:59:28.408033 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-25 00:59:28.408038 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-osd) 2025-05-25 00:59:28.408043 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-25 00:59:28.408077 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-25 00:59:28.408082 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-25 00:59:28.408088 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-25 00:59:28.408093 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-25 00:59:28.408098 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-25 00:59:28.408104 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-25 00:59:28.408109 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-25 00:59:28.408115 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-25 00:59:28.408120 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-25 00:59:28.408125 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-25 00:59:28.408131 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-25 00:59:28.408136 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-25 00:59:28.408147 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-25 00:59:28.408153 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-25 00:59:28.408158 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-25 00:59:28.408164 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-25 00:59:28.408169 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-25 00:59:28.408174 | 
orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-25 00:59:28.408180 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-25 00:59:28.408185 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-25 00:59:28.408190 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-25 00:59:28.408196 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-25 00:59:28.408201 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-25 00:59:28.408207 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-25 00:59:28.408262 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-25 00:59:28.408270 | orchestrator | 2025-05-25 00:59:28.408275 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-25 00:59:28.408281 | orchestrator | Sunday 25 May 2025 00:50:08 +0000 (0:00:05.707) 0:03:28.728 ************ 2025-05-25 00:59:28.408286 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.408292 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.408297 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.408303 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.408309 | orchestrator | 2025-05-25 00:59:28.408318 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-25 00:59:28.408323 | orchestrator | Sunday 25 May 2025 00:50:09 +0000 (0:00:01.362) 0:03:30.090 ************ 2025-05-25 00:59:28.408329 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.408334 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.408340 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.408345 | orchestrator | 2025-05-25 00:59:28.408350 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-25 00:59:28.408356 | orchestrator | Sunday 25 May 2025 00:50:11 +0000 (0:00:01.478) 0:03:31.569 ************ 2025-05-25 00:59:28.408361 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.408367 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.408372 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.408378 | orchestrator | 2025-05-25 00:59:28.408383 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-25 00:59:28.408388 | orchestrator | Sunday 25 May 2025 00:50:12 +0000 (0:00:01.243) 0:03:32.812 ************ 2025-05-25 00:59:28.408394 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.408399 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.408404 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.408410 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.408415 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.408421 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.408426 | orchestrator | 2025-05-25 00:59:28.408431 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-25 00:59:28.408441 | orchestrator | Sunday 25 May 2025 00:50:13 +0000 (0:00:00.825) 0:03:33.638 ************ 
2025-05-25 00:59:28.408446 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408452 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408457 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408462 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.408468 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.408473 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.408478 | orchestrator |
2025-05-25 00:59:28.408484 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-25 00:59:28.408489 | orchestrator | Sunday 25 May 2025 00:50:13 +0000 (0:00:00.728) 0:03:34.366 ************
2025-05-25 00:59:28.408494 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408500 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408505 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408511 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.408516 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.408529 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.408535 | orchestrator |
2025-05-25 00:59:28.408541 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-25 00:59:28.408546 | orchestrator | Sunday 25 May 2025 00:50:14 +0000 (0:00:00.853) 0:03:35.220 ************
2025-05-25 00:59:28.408552 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408557 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408562 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408568 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.408573 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.408578 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.408584 | orchestrator |
2025-05-25 00:59:28.408589 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-25 00:59:28.408595 | orchestrator | Sunday 25 May 2025 00:50:15 +0000 (0:00:00.646) 0:03:35.866 ************
2025-05-25 00:59:28.408600 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408605 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408611 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408616 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.408621 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.408626 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.408632 | orchestrator |
2025-05-25 00:59:28.408637 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-25 00:59:28.408643 | orchestrator | Sunday 25 May 2025 00:50:16 +0000 (0:00:00.831) 0:03:36.698 ************
2025-05-25 00:59:28.408648 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408653 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408659 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408664 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.408669 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.408675 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.408680 | orchestrator |
2025-05-25 00:59:28.408686 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-25 00:59:28.408729 | orchestrator | Sunday 25 May 2025 00:50:16 +0000 (0:00:00.634) 0:03:37.333 ************
2025-05-25 00:59:28.408736 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408742 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408747 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408753 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.408758 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.408763 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.408769 | orchestrator |
2025-05-25 00:59:28.408774 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-25 00:59:28.408780 | orchestrator | Sunday 25 May 2025 00:50:17 +0000 (0:00:00.848) 0:03:38.182 ************
2025-05-25 00:59:28.408788 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408798 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408804 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408809 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.408814 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.408820 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.408825 | orchestrator |
2025-05-25 00:59:28.408830 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-25 00:59:28.408836 | orchestrator | Sunday 25 May 2025 00:50:18 +0000 (0:00:00.637) 0:03:38.820 ************
2025-05-25 00:59:28.408841 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408847 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408852 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408857 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.408863 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.408868 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.408873 | orchestrator |
2025-05-25 00:59:28.408879 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-25 00:59:28.408896 | orchestrator | Sunday 25 May 2025 00:50:20 +0000 (0:00:02.175) 0:03:40.995 ************
2025-05-25 00:59:28.408902 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408907 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408912 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408918 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.408923 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.408928 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.408934 | orchestrator |
2025-05-25 00:59:28.408939 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-25 00:59:28.408945 | orchestrator | Sunday 25 May 2025 00:50:21 +0000 (0:00:00.802) 0:03:41.798 ************
2025-05-25 00:59:28.408951 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-25 00:59:28.408956 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-25 00:59:28.408961 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.408967 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-25 00:59:28.408972 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-25 00:59:28.408977 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.408983 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-25 00:59:28.408988 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-25 00:59:28.408993 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.408999 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.409013 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.409018 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409023 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.409029 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.409034 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409040 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.409045 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.409050 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409055 | orchestrator |
2025-05-25 00:59:28.409061 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-25 00:59:28.409066 | orchestrator | Sunday 25 May 2025 00:50:22 +0000 (0:00:01.012) 0:03:42.811 ************
2025-05-25 00:59:28.409072 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-25 00:59:28.409077 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-25 00:59:28.409082 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409088 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-25 00:59:28.409093 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-25 00:59:28.409098 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409104 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-25 00:59:28.409113 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-25 00:59:28.409119 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409124 | orchestrator | ok: [testbed-node-3] => (item=osd memory target)
2025-05-25 00:59:28.409129 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target)
2025-05-25 00:59:28.409135 | orchestrator | ok: [testbed-node-4] => (item=osd memory target)
2025-05-25 00:59:28.409140 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target)
2025-05-25 00:59:28.409146 | orchestrator | ok: [testbed-node-5] => (item=osd memory target)
2025-05-25 00:59:28.409151 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target)
2025-05-25 00:59:28.409156 | orchestrator |
2025-05-25 00:59:28.409162 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-25 00:59:28.409167 | orchestrator | Sunday 25 May 2025 00:50:23 +0000 (0:00:00.873) 0:03:43.684 ************
2025-05-25 00:59:28.409172 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409178 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409183 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409188 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.409194 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.409199 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.409205 | orchestrator |
2025-05-25 00:59:28.409210 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-25 00:59:28.409215 | orchestrator | Sunday 25 May 2025 00:50:24 +0000 (0:00:00.925) 0:03:44.610 ************
2025-05-25 00:59:28.409221 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409265 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409273 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409279 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409284 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409290 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409295 | orchestrator |
2025-05-25 00:59:28.409300 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-25 00:59:28.409306 | orchestrator | Sunday 25 May 2025 00:50:24 +0000 (0:00:00.581) 0:03:45.191 ************
2025-05-25 00:59:28.409311 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409317 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409322 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409331 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409336 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409342 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409347 | orchestrator |
2025-05-25 00:59:28.409352 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-25 00:59:28.409358 | orchestrator | Sunday 25 May 2025 00:50:25 +0000 (0:00:00.807) 0:03:45.998 ************
2025-05-25 00:59:28.409363 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409369 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409374 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409379 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409385 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409390 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409395 | orchestrator |
2025-05-25 00:59:28.409401 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-25 00:59:28.409406 | orchestrator | Sunday 25 May 2025 00:50:26 +0000 (0:00:00.651) 0:03:46.650 ************
2025-05-25 00:59:28.409412 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409417 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409422 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409428 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409433 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409438 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409444 | orchestrator |
2025-05-25 00:59:28.409449 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-25 00:59:28.409459 | orchestrator | Sunday 25 May 2025 00:50:26 +0000 (0:00:00.751) 0:03:47.402 ************
2025-05-25 00:59:28.409464 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409469 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409475 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409480 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.409485 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.409491 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.409496 | orchestrator |
2025-05-25 00:59:28.409502 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-25 00:59:28.409507 | orchestrator | Sunday 25 May 2025 00:50:27 +0000 (0:00:00.701) 0:03:48.104 ************
2025-05-25 00:59:28.409512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.409518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.409531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.409536 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409542 | orchestrator |
2025-05-25 00:59:28.409547 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-25 00:59:28.409553 | orchestrator | Sunday 25 May 2025 00:50:28 +0000 (0:00:00.524) 0:03:48.628 ************
2025-05-25 00:59:28.409558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.409563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.409569 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.409574 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409580 | orchestrator |
2025-05-25 00:59:28.409585 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-25 00:59:28.409590 | orchestrator | Sunday 25 May 2025 00:50:28 +0000 (0:00:00.843) 0:03:49.472 ************
2025-05-25 00:59:28.409596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.409601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.409606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.409612 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409617 | orchestrator |
2025-05-25 00:59:28.409622 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.409628 | orchestrator | Sunday 25 May 2025 00:50:29 +0000 (0:00:00.477) 0:03:49.949 ************
2025-05-25 00:59:28.409633 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409639 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409644 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409649 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.409655 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.409660 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.409665 | orchestrator |
2025-05-25 00:59:28.409671 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-25 00:59:28.409676 | orchestrator | Sunday 25 May 2025 00:50:30 +0000 (0:00:00.728) 0:03:50.678 ************
2025-05-25 00:59:28.409682 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 00:59:28.409687 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409692 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 00:59:28.409697 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409703 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.409708 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409713 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.409719 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.409724 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.409730 | orchestrator |
2025-05-25 00:59:28.409735 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-25 00:59:28.409741 | orchestrator | Sunday 25 May 2025 00:50:31 +0000 (0:00:01.278) 0:03:51.956 ************
2025-05-25 00:59:28.409751 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409794 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409802 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409807 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409812 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409818 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409823 | orchestrator |
2025-05-25 00:59:28.409828 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.409834 | orchestrator | Sunday 25 May 2025 00:50:32 +0000 (0:00:00.767) 0:03:52.723 ************
2025-05-25 00:59:28.409839 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409844 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409850 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409855 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409864 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409869 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409875 | orchestrator |
2025-05-25 00:59:28.409880 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-25 00:59:28.409922 | orchestrator | Sunday 25 May 2025 00:50:33 +0000 (0:00:01.033) 0:03:53.757 ************
2025-05-25 00:59:28.409928 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 00:59:28.409934 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.409939 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 00:59:28.409944 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.409950 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.409955 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.409960 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.409966 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.409971 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.409976 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.409982 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.409987 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.409992 | orchestrator |
2025-05-25 00:59:28.409998 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-25 00:59:28.410003 | orchestrator | Sunday 25 May 2025 00:50:34 +0000 (0:00:01.020) 0:03:54.777 ************
2025-05-25 00:59:28.410009 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.410031 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.410037 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.410042 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.410055 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410060 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.410065 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.410069 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.410074 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.410079 | orchestrator |
2025-05-25 00:59:28.410084 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-25 00:59:28.410089 | orchestrator | Sunday 25 May 2025 00:50:34 +0000 (0:00:00.732) 0:03:55.510 ************
2025-05-25 00:59:28.410093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.410098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.410103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.410108 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.410113 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-25 00:59:28.410125 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-25 00:59:28.410130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-25 00:59:28.410135 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.410139 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-25 00:59:28.410144 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-25 00:59:28.410149 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-25 00:59:28.410154 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.410158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.410163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.410168 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-25 00:59:28.410173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.410177 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410182 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-25 00:59:28.410187 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-25 00:59:28.410192 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-25 00:59:28.410196 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.410201 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-25 00:59:28.410206 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-25 00:59:28.410211 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.410215 | orchestrator |
2025-05-25 00:59:28.410220 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-25 00:59:28.410225 | orchestrator | Sunday 25 May 2025 00:50:36 +0000 (0:00:01.338) 0:03:56.848 ************
2025-05-25 00:59:28.410230 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.410235 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.410239 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.410244 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.410249 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.410254 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.410258 | orchestrator |
2025-05-25 00:59:28.410302 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-25 00:59:28.410310 | orchestrator | Sunday 25 May 2025 00:50:40 +0000 (0:00:04.133) 0:04:00.982 ************
2025-05-25 00:59:28.410315 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.410319 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.410324 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.410329 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.410334 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.410338 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.410343 | orchestrator |
2025-05-25 00:59:28.410348 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-05-25 00:59:28.410353 | orchestrator | Sunday 25 May 2025 00:50:41 +0000 (0:00:00.938) 0:04:01.921 ************
2025-05-25 00:59:28.410358 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410366 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.410371 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.410376 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.410381 | orchestrator |
2025-05-25 00:59:28.410386 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-05-25 00:59:28.410390 | orchestrator | Sunday 25 May 2025 00:50:42 +0000 (0:00:00.954) 0:04:02.875 ************
2025-05-25 00:59:28.410395 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.410400 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.410405 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.410409 | orchestrator |
2025-05-25 00:59:28.410414 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] *******************
2025-05-25 00:59:28.410423 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.410428 | orchestrator |
2025-05-25 00:59:28.410433 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-25 00:59:28.410438 | orchestrator | Sunday 25 May 2025 00:50:43 +0000 (0:00:00.974) 0:04:03.849 ************
2025-05-25 00:59:28.410443 | orchestrator |
2025-05-25 00:59:28.410447 | orchestrator | TASK [ceph-handler : copy mon restart script] **********************************
2025-05-25 00:59:28.410452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.410457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.410462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.410467 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410471 | orchestrator |
2025-05-25 00:59:28.410476 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-25 00:59:28.410481 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.410486 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.410491 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.410495 | orchestrator |
2025-05-25 00:59:28.410500 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-05-25 00:59:28.410505 | orchestrator | Sunday 25 May 2025 00:50:44 +0000 (0:00:01.441) 0:04:05.291 ************
2025-05-25 00:59:28.410510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.410514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.410519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.410524 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.410529 | orchestrator |
2025-05-25 00:59:28.410533 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-05-25 00:59:28.410538 | orchestrator | Sunday 25 May 2025 00:50:45 +0000 (0:00:00.923) 0:04:06.215 ************
2025-05-25 00:59:28.410543 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.410555 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.410560 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.410565 | orchestrator |
2025-05-25 00:59:28.410570 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ********************
2025-05-25 00:59:28.410575 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410580 | orchestrator |
2025-05-25 00:59:28.410585 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-05-25 00:59:28.410589 | orchestrator | Sunday 25 May 2025 00:50:46 +0000 (0:00:00.760) 0:04:06.975 ************
2025-05-25 00:59:28.410594 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.410599 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.410604 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.410609 | orchestrator |
2025-05-25 00:59:28.410613 | orchestrator | TASK [ceph-handler : osds handler] *********************************************
2025-05-25 00:59:28.410618 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410623 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.410628 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.410632 | orchestrator |
2025-05-25 00:59:28.410637 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-05-25 00:59:28.410642 | orchestrator | Sunday 25 May 2025 00:50:47 +0000 (0:00:00.667) 0:04:07.642 ************
2025-05-25 00:59:28.410647 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.410651 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.410656 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.410661 | orchestrator |
2025-05-25 00:59:28.410666 | orchestrator | TASK [ceph-handler : mdss handler] *********************************************
2025-05-25 00:59:28.410670 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410675 | orchestrator |
2025-05-25 00:59:28.410680 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-05-25 00:59:28.410685 | orchestrator | Sunday 25 May 2025 00:50:47 +0000 (0:00:00.533) 0:04:08.176 ************
2025-05-25 00:59:28.410693 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.410698 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.410703 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.410708 | orchestrator |
2025-05-25 00:59:28.410713 | orchestrator | TASK [ceph-handler : rgws handler] *********************************************
2025-05-25 00:59:28.410717 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410722 | orchestrator |
2025-05-25 00:59:28.410727 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-05-25 00:59:28.410732 | orchestrator | Sunday 25 May 2025 00:50:48 +0000 (0:00:01.106) 0:04:09.282 ************
2025-05-25 00:59:28.410737 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410742 | orchestrator |
2025-05-25 00:59:28.410779 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-05-25 00:59:28.410786 | orchestrator | Sunday 25 May 2025 00:50:48 +0000 (0:00:00.124) 0:04:09.407 ************
2025-05-25 00:59:28.410791 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.410796 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.410801 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.410805 | orchestrator |
2025-05-25 00:59:28.410810 | orchestrator | TASK [ceph-handler : rbdmirrors handler] ***************************************
2025-05-25 00:59:28.410815 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410819 | orchestrator |
2025-05-25 00:59:28.410824 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-05-25 00:59:28.410833 | orchestrator | Sunday 25 May 2025 00:50:49 +0000 (0:00:00.534) 0:04:09.941 ************
2025-05-25 00:59:28.410837 | orchestrator |
2025-05-25 00:59:28.410842 | orchestrator | TASK [ceph-handler : mgrs handler] *********************************************
2025-05-25 00:59:28.410847 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410852 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.410856 | orchestrator |
2025-05-25 00:59:28.410861 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-05-25 00:59:28.410866 | orchestrator | Sunday 25 May 2025 00:50:50 +0000 (0:00:01.093) 0:04:11.034 ************
2025-05-25 00:59:28.410871 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.410876 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.410880 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.410899 | orchestrator |
2025-05-25 00:59:28.410904 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] *******************
2025-05-25 00:59:28.410909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.410914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.410918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.410923 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410928 | orchestrator |
2025-05-25 00:59:28.410932 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-25 00:59:28.410937 | orchestrator | Sunday 25 May 2025 00:50:51 +0000 (0:00:00.973) 0:04:12.008 ************
2025-05-25 00:59:28.410942 | orchestrator |
2025-05-25 00:59:28.410947 | orchestrator | TASK [ceph-handler : copy mgr restart script] **********************************
2025-05-25 00:59:28.410951 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.410956 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.410961 | orchestrator |
2025-05-25 00:59:28.410966 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-25 00:59:28.410971 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.410975 | orchestrator |
2025-05-25 00:59:28.410980 | orchestrator | TASK [ceph-handler : copy mgr restart script] **********************************
2025-05-25 00:59:28.410985 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.410990 | orchestrator |
2025-05-25 00:59:28.410994 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-25 00:59:28.410999 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.411008 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.411013 | orchestrator |
2025-05-25 00:59:28.411017 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-05-25 00:59:28.411022 | orchestrator | Sunday 25 May 2025 00:50:53 +0000 (0:00:01.555) 0:04:13.563 ************
2025-05-25 00:59:28.411027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.411032 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.411036 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.411041 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.411046 | orchestrator |
2025-05-25 00:59:28.411051 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-05-25 00:59:28.411056 | orchestrator | Sunday 25 May 2025 00:50:53 +0000 (0:00:00.706) 0:04:14.270 ************
2025-05-25 00:59:28.411060 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.411065 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.411070 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.411075 | orchestrator |
2025-05-25 00:59:28.411087 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ********************
2025-05-25 00:59:28.411092 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.411097 | orchestrator |
2025-05-25 00:59:28.411102 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-05-25 00:59:28.411107 | orchestrator | Sunday 25 May 2025 00:50:54 +0000 (0:00:01.072) 0:04:15.342 ************
2025-05-25 00:59:28.411111 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.411116 | orchestrator |
2025-05-25 00:59:28.411121 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-05-25 00:59:28.411126 | orchestrator | Sunday 25 May 2025 00:50:55 +0000 (0:00:00.719) 0:04:16.062 ************
2025-05-25 00:59:28.411131 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.411135 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.411140 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.411145 | orchestrator | 2025-05-25 00:59:28.411150 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-25 00:59:28.411155 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.411159 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.411164 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.411169 | orchestrator | 2025-05-25 00:59:28.411180 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-25 00:59:28.411187 | orchestrator | Sunday 25 May 2025 00:50:56 +0000 (0:00:01.075) 0:04:17.137 ************ 2025-05-25 00:59:28.411195 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.411202 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.411214 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.411222 | orchestrator | 2025-05-25 00:59:28.411229 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-25 00:59:28.411237 | orchestrator | Sunday 25 May 2025 00:50:57 +0000 (0:00:01.168) 0:04:18.306 ************ 2025-05-25 00:59:28.411244 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:59:28.411303 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:59:28.411314 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:59:28.411322 | orchestrator | 2025-05-25 00:59:28.411329 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-25 00:59:28.411337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.411342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.411347 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.411351 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.411356 | orchestrator | 2025-05-25 00:59:28.411361 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-25 00:59:28.411370 | orchestrator | Sunday 25 May 2025 00:50:59 +0000 (0:00:01.600) 0:04:19.907 ************ 2025-05-25 00:59:28.411381 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.411385 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.411390 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.411395 | orchestrator | 2025-05-25 00:59:28.411400 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-25 00:59:28.411405 | orchestrator | Sunday 25 May 2025 00:51:00 +0000 (0:00:01.313) 0:04:21.220 ************ 2025-05-25 00:59:28.411410 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.411414 | orchestrator | 2025-05-25 00:59:28.411419 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-25 00:59:28.411424 | orchestrator | Sunday 25 May 2025 00:51:01 +0000 (0:00:00.640) 0:04:21.860 ************ 2025-05-25 00:59:28.411429 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.411433 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.411438 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.411443 | orchestrator | 2025-05-25 00:59:28.411447 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-25 00:59:28.411452 | orchestrator | Sunday 25 May 2025 00:51:01 +0000 (0:00:00.359) 0:04:22.220 ************ 2025-05-25 00:59:28.411457 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.411462 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.411467 | orchestrator | changed: 
[testbed-node-5] 2025-05-25 00:59:28.411471 | orchestrator | 2025-05-25 00:59:28.411476 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-25 00:59:28.411481 | orchestrator | Sunday 25 May 2025 00:51:03 +0000 (0:00:01.519) 0:04:23.740 ************ 2025-05-25 00:59:28.411486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.411490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.411495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.411500 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.411504 | orchestrator | 2025-05-25 00:59:28.411509 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-25 00:59:28.411514 | orchestrator | Sunday 25 May 2025 00:51:04 +0000 (0:00:00.849) 0:04:24.590 ************ 2025-05-25 00:59:28.411519 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.411524 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.411528 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.411533 | orchestrator | 2025-05-25 00:59:28.411538 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-25 00:59:28.411543 | orchestrator | Sunday 25 May 2025 00:51:04 +0000 (0:00:00.522) 0:04:25.113 ************ 2025-05-25 00:59:28.411547 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.411552 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.411557 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.411561 | orchestrator | 2025-05-25 00:59:28.411566 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-25 00:59:28.411571 | orchestrator | Sunday 25 May 2025 00:51:05 +0000 (0:00:00.439) 0:04:25.552 ************ 2025-05-25 00:59:28.411576 | orchestrator | skipping: 
[testbed-node-3] 2025-05-25 00:59:28.411581 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.411585 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.411590 | orchestrator | 2025-05-25 00:59:28.411595 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-25 00:59:28.411599 | orchestrator | Sunday 25 May 2025 00:51:05 +0000 (0:00:00.651) 0:04:26.204 ************ 2025-05-25 00:59:28.411604 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.411609 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.411614 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.411618 | orchestrator | 2025-05-25 00:59:28.411623 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-25 00:59:28.411637 | orchestrator | Sunday 25 May 2025 00:51:06 +0000 (0:00:00.348) 0:04:26.552 ************ 2025-05-25 00:59:28.411646 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.411651 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.411656 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.411660 | orchestrator | 2025-05-25 00:59:28.411665 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-25 00:59:28.411670 | orchestrator | 2025-05-25 00:59:28.411675 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-25 00:59:28.411680 | orchestrator | Sunday 25 May 2025 00:51:08 +0000 (0:00:02.301) 0:04:28.854 ************ 2025-05-25 00:59:28.411684 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:59:28.411689 | orchestrator | 2025-05-25 00:59:28.411694 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-25 00:59:28.411699 | orchestrator | Sunday 25 
May 2025 00:51:09 +0000 (0:00:00.919) 0:04:29.774 ************ 2025-05-25 00:59:28.411703 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.411708 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.411713 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.411718 | orchestrator | 2025-05-25 00:59:28.411722 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-25 00:59:28.411727 | orchestrator | Sunday 25 May 2025 00:51:10 +0000 (0:00:00.905) 0:04:30.680 ************ 2025-05-25 00:59:28.411732 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.411737 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.411777 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.411784 | orchestrator | 2025-05-25 00:59:28.411789 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-25 00:59:28.411794 | orchestrator | Sunday 25 May 2025 00:51:10 +0000 (0:00:00.340) 0:04:31.020 ************ 2025-05-25 00:59:28.411799 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.411803 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.411808 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.411813 | orchestrator | 2025-05-25 00:59:28.411818 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-25 00:59:28.411823 | orchestrator | Sunday 25 May 2025 00:51:10 +0000 (0:00:00.439) 0:04:31.460 ************ 2025-05-25 00:59:28.411827 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.411836 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.411840 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.411845 | orchestrator | 2025-05-25 00:59:28.411850 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-25 00:59:28.411855 | orchestrator | Sunday 25 May 2025 00:51:11 
+0000 (0:00:00.293) 0:04:31.753 ************ 2025-05-25 00:59:28.411860 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.411864 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.411869 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.411874 | orchestrator | 2025-05-25 00:59:28.411878 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-25 00:59:28.411883 | orchestrator | Sunday 25 May 2025 00:51:11 +0000 (0:00:00.737) 0:04:32.490 ************ 2025-05-25 00:59:28.411926 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.411931 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.411936 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.411941 | orchestrator | 2025-05-25 00:59:28.411946 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-25 00:59:28.411950 | orchestrator | Sunday 25 May 2025 00:51:12 +0000 (0:00:00.306) 0:04:32.797 ************ 2025-05-25 00:59:28.411955 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.411960 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.411965 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.411970 | orchestrator | 2025-05-25 00:59:28.411975 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-25 00:59:28.411980 | orchestrator | Sunday 25 May 2025 00:51:12 +0000 (0:00:00.434) 0:04:33.232 ************ 2025-05-25 00:59:28.411988 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.411993 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.411997 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412002 | orchestrator | 2025-05-25 00:59:28.412007 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-25 00:59:28.412011 | orchestrator | Sunday 25 May 2025 00:51:12 +0000 (0:00:00.313) 
0:04:33.545 ************ 2025-05-25 00:59:28.412016 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412020 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412025 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412029 | orchestrator | 2025-05-25 00:59:28.412034 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-25 00:59:28.412038 | orchestrator | Sunday 25 May 2025 00:51:13 +0000 (0:00:00.304) 0:04:33.850 ************ 2025-05-25 00:59:28.412043 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412047 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412060 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412064 | orchestrator | 2025-05-25 00:59:28.412069 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-25 00:59:28.412073 | orchestrator | Sunday 25 May 2025 00:51:13 +0000 (0:00:00.338) 0:04:34.189 ************ 2025-05-25 00:59:28.412078 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.412082 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.412087 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.412092 | orchestrator | 2025-05-25 00:59:28.412096 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-25 00:59:28.412101 | orchestrator | Sunday 25 May 2025 00:51:14 +0000 (0:00:00.901) 0:04:35.090 ************ 2025-05-25 00:59:28.412105 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412110 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412115 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412119 | orchestrator | 2025-05-25 00:59:28.412124 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-25 00:59:28.412128 | orchestrator | Sunday 25 May 2025 00:51:14 +0000 (0:00:00.287) 0:04:35.378 
************ 2025-05-25 00:59:28.412133 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.412137 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.412142 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.412147 | orchestrator | 2025-05-25 00:59:28.412151 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-25 00:59:28.412156 | orchestrator | Sunday 25 May 2025 00:51:15 +0000 (0:00:00.319) 0:04:35.698 ************ 2025-05-25 00:59:28.412160 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412165 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412169 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412174 | orchestrator | 2025-05-25 00:59:28.412178 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-25 00:59:28.412183 | orchestrator | Sunday 25 May 2025 00:51:15 +0000 (0:00:00.334) 0:04:36.032 ************ 2025-05-25 00:59:28.412187 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412192 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412196 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412201 | orchestrator | 2025-05-25 00:59:28.412206 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-25 00:59:28.412210 | orchestrator | Sunday 25 May 2025 00:51:15 +0000 (0:00:00.468) 0:04:36.500 ************ 2025-05-25 00:59:28.412215 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412219 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412224 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412228 | orchestrator | 2025-05-25 00:59:28.412233 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-25 00:59:28.412237 | orchestrator | Sunday 25 May 2025 00:51:16 +0000 (0:00:00.296) 0:04:36.796 ************ 
2025-05-25 00:59:28.412246 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412251 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412292 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412299 | orchestrator | 2025-05-25 00:59:28.412304 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-25 00:59:28.412308 | orchestrator | Sunday 25 May 2025 00:51:16 +0000 (0:00:00.287) 0:04:37.084 ************ 2025-05-25 00:59:28.412313 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412318 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412326 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412333 | orchestrator | 2025-05-25 00:59:28.412340 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-25 00:59:28.412348 | orchestrator | Sunday 25 May 2025 00:51:16 +0000 (0:00:00.330) 0:04:37.414 ************ 2025-05-25 00:59:28.412355 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.412366 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.412373 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.412381 | orchestrator | 2025-05-25 00:59:28.412388 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-25 00:59:28.412395 | orchestrator | Sunday 25 May 2025 00:51:17 +0000 (0:00:00.845) 0:04:38.260 ************ 2025-05-25 00:59:28.412402 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.412410 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.412417 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.412424 | orchestrator | 2025-05-25 00:59:28.412430 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-25 00:59:28.412435 | orchestrator | Sunday 25 May 2025 00:51:18 +0000 (0:00:00.386) 0:04:38.647 ************ 2025-05-25 00:59:28.412440 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412444 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412449 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412453 | orchestrator | 2025-05-25 00:59:28.412458 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-25 00:59:28.412462 | orchestrator | Sunday 25 May 2025 00:51:18 +0000 (0:00:00.327) 0:04:38.975 ************ 2025-05-25 00:59:28.412467 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412471 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412476 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412480 | orchestrator | 2025-05-25 00:59:28.412485 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-25 00:59:28.412489 | orchestrator | Sunday 25 May 2025 00:51:18 +0000 (0:00:00.309) 0:04:39.284 ************ 2025-05-25 00:59:28.412494 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412498 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412503 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412507 | orchestrator | 2025-05-25 00:59:28.412512 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-25 00:59:28.412516 | orchestrator | Sunday 25 May 2025 00:51:19 +0000 (0:00:00.404) 0:04:39.689 ************ 2025-05-25 00:59:28.412521 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412525 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412530 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412534 | orchestrator | 2025-05-25 00:59:28.412539 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-25 00:59:28.412543 | orchestrator | Sunday 25 May 2025 00:51:19 +0000 (0:00:00.251) 0:04:39.940 ************ 2025-05-25 00:59:28.412548 | 
orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412552 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412566 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412570 | orchestrator | 2025-05-25 00:59:28.412575 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-25 00:59:28.412579 | orchestrator | Sunday 25 May 2025 00:51:19 +0000 (0:00:00.294) 0:04:40.234 ************ 2025-05-25 00:59:28.412584 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412593 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412598 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412602 | orchestrator | 2025-05-25 00:59:28.412607 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-25 00:59:28.412612 | orchestrator | Sunday 25 May 2025 00:51:19 +0000 (0:00:00.275) 0:04:40.510 ************ 2025-05-25 00:59:28.412616 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412621 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412625 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412630 | orchestrator | 2025-05-25 00:59:28.412634 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-25 00:59:28.412639 | orchestrator | Sunday 25 May 2025 00:51:20 +0000 (0:00:00.484) 0:04:40.994 ************ 2025-05-25 00:59:28.412643 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412648 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412652 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412657 | orchestrator | 2025-05-25 00:59:28.412661 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-25 00:59:28.412666 | orchestrator | Sunday 25 May 2025 00:51:20 +0000 (0:00:00.322) 
0:04:41.317 ************ 2025-05-25 00:59:28.412670 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412675 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412679 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412684 | orchestrator | 2025-05-25 00:59:28.412689 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-25 00:59:28.412693 | orchestrator | Sunday 25 May 2025 00:51:21 +0000 (0:00:00.314) 0:04:41.631 ************ 2025-05-25 00:59:28.412698 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412702 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412707 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412711 | orchestrator | 2025-05-25 00:59:28.412716 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-25 00:59:28.412720 | orchestrator | Sunday 25 May 2025 00:51:21 +0000 (0:00:00.294) 0:04:41.925 ************ 2025-05-25 00:59:28.412725 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412729 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412734 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412738 | orchestrator | 2025-05-25 00:59:28.412743 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-25 00:59:28.412786 | orchestrator | Sunday 25 May 2025 00:51:21 +0000 (0:00:00.447) 0:04:42.373 ************ 2025-05-25 00:59:28.412793 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412797 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412802 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412806 | orchestrator | 2025-05-25 00:59:28.412811 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-25 00:59:28.412815 | orchestrator | 
Sunday 25 May 2025 00:51:22 +0000 (0:00:00.310) 0:04:42.684 ************ 2025-05-25 00:59:28.412820 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-25 00:59:28.412825 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-25 00:59:28.412829 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412834 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-25 00:59:28.412842 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-25 00:59:28.412847 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412851 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-25 00:59:28.412856 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-25 00:59:28.412860 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412864 | orchestrator | 2025-05-25 00:59:28.412869 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-25 00:59:28.412874 | orchestrator | Sunday 25 May 2025 00:51:22 +0000 (0:00:00.308) 0:04:42.992 ************ 2025-05-25 00:59:28.412898 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-25 00:59:28.412905 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-25 00:59:28.412909 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412914 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-25 00:59:28.412918 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-25 00:59:28.412923 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412927 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-25 00:59:28.412932 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-25 00:59:28.412936 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412941 | orchestrator | 2025-05-25 00:59:28.412945 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-05-25 00:59:28.412950 | orchestrator | Sunday 25 May 2025 00:51:22 +0000 (0:00:00.330) 0:04:43.323 ************ 2025-05-25 00:59:28.412954 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412959 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412963 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412968 | orchestrator | 2025-05-25 00:59:28.412972 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-25 00:59:28.412977 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.428) 0:04:43.752 ************ 2025-05-25 00:59:28.412982 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.412986 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.412990 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.412995 | orchestrator | 2025-05-25 00:59:28.413000 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-25 00:59:28.413004 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.305) 0:04:44.058 ************ 2025-05-25 00:59:28.413009 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413013 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.413017 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.413022 | orchestrator | 2025-05-25 00:59:28.413026 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-25 00:59:28.413031 | orchestrator | Sunday 25 May 2025 00:51:23 +0000 (0:00:00.345) 0:04:44.403 ************ 2025-05-25 00:59:28.413035 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413040 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.413044 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.413049 | orchestrator | 2025-05-25 
00:59:28.413053 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-25 00:59:28.413058 | orchestrator | Sunday 25 May 2025 00:51:24 +0000 (0:00:00.313) 0:04:44.716 ************ 2025-05-25 00:59:28.413070 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413075 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.413079 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.413084 | orchestrator | 2025-05-25 00:59:28.413088 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-25 00:59:28.413093 | orchestrator | Sunday 25 May 2025 00:51:24 +0000 (0:00:00.455) 0:04:45.172 ************ 2025-05-25 00:59:28.413097 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413101 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.413106 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.413110 | orchestrator | 2025-05-25 00:59:28.413115 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-25 00:59:28.413119 | orchestrator | Sunday 25 May 2025 00:51:24 +0000 (0:00:00.332) 0:04:45.505 ************ 2025-05-25 00:59:28.413124 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.413128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.413133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.413141 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413146 | orchestrator | 2025-05-25 00:59:28.413150 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-25 00:59:28.413155 | orchestrator | Sunday 25 May 2025 00:51:25 +0000 (0:00:00.388) 0:04:45.893 ************ 2025-05-25 00:59:28.413159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 
00:59:28.413164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.413168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.413172 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413177 | orchestrator | 2025-05-25 00:59:28.413181 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-25 00:59:28.413186 | orchestrator | Sunday 25 May 2025 00:51:25 +0000 (0:00:00.402) 0:04:46.296 ************ 2025-05-25 00:59:28.413229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.413239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.413246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.413253 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413260 | orchestrator | 2025-05-25 00:59:28.413267 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-25 00:59:28.413274 | orchestrator | Sunday 25 May 2025 00:51:26 +0000 (0:00:00.384) 0:04:46.680 ************ 2025-05-25 00:59:28.413282 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413289 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.413296 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.413304 | orchestrator | 2025-05-25 00:59:28.413316 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-25 00:59:28.413321 | orchestrator | Sunday 25 May 2025 00:51:26 +0000 (0:00:00.291) 0:04:46.972 ************ 2025-05-25 00:59:28.413325 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-25 00:59:28.413330 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.413335 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-25 00:59:28.413339 | orchestrator | skipping: [testbed-node-1] 
2025-05-25 00:59:28.413343 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.413348 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413352 | orchestrator |
2025-05-25 00:59:28.413357 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-25 00:59:28.413361 | orchestrator | Sunday 25 May 2025 00:51:27 +0000 (0:00:00.612) 0:04:47.584 ************
2025-05-25 00:59:28.413366 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413370 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413375 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413379 | orchestrator |
2025-05-25 00:59:28.413383 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.413388 | orchestrator | Sunday 25 May 2025 00:51:27 +0000 (0:00:00.301) 0:04:47.886 ************
2025-05-25 00:59:28.413392 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413397 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413401 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413406 | orchestrator |
2025-05-25 00:59:28.413410 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-25 00:59:28.413415 | orchestrator | Sunday 25 May 2025 00:51:27 +0000 (0:00:00.327) 0:04:48.213 ************
2025-05-25 00:59:28.413419 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 00:59:28.413424 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413428 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 00:59:28.413432 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413437 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.413441 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413446 | orchestrator |
2025-05-25 00:59:28.413450 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-25 00:59:28.413464 | orchestrator | Sunday 25 May 2025 00:51:28 +0000 (0:00:00.406) 0:04:48.619 ************
2025-05-25 00:59:28.413471 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413478 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413485 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413492 | orchestrator |
2025-05-25 00:59:28.413499 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-25 00:59:28.413506 | orchestrator | Sunday 25 May 2025 00:51:28 +0000 (0:00:00.461) 0:04:49.081 ************
2025-05-25 00:59:28.413513 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.413520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.413527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.413535 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413542 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-25 00:59:28.413550 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-25 00:59:28.413569 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-25 00:59:28.413576 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413584 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-25 00:59:28.413591 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-25 00:59:28.413598 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-25 00:59:28.413605 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413612 | orchestrator |
2025-05-25 00:59:28.413619 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-25 00:59:28.413627 | orchestrator | Sunday 25 May 2025 00:51:29 +0000 (0:00:00.546) 0:04:49.628 ************
2025-05-25 00:59:28.413633 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413641 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413649 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413656 | orchestrator |
2025-05-25 00:59:28.413664 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-25 00:59:28.413671 | orchestrator | Sunday 25 May 2025 00:51:29 +0000 (0:00:00.638) 0:04:50.266 ************
2025-05-25 00:59:28.413679 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413685 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413690 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413694 | orchestrator |
2025-05-25 00:59:28.413699 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-25 00:59:28.413703 | orchestrator | Sunday 25 May 2025 00:51:30 +0000 (0:00:00.484) 0:04:50.750 ************
2025-05-25 00:59:28.413708 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413713 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413717 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413722 | orchestrator |
2025-05-25 00:59:28.413727 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-25 00:59:28.413731 | orchestrator | Sunday 25 May 2025 00:51:30 +0000 (0:00:00.644) 0:04:51.394 ************
2025-05-25 00:59:28.413736 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413740 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.413768 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.413773 | orchestrator |
2025-05-25 00:59:28.413778 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] **********************************
2025-05-25 00:59:28.413782 | orchestrator | Sunday 25 May 2025 00:51:31 +0000 (0:00:00.544) 0:04:51.939 ************
2025-05-25 00:59:28.413787 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.413791 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.413796 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.413800 | orchestrator |
2025-05-25 00:59:28.413805 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] **********************************
2025-05-25 00:59:28.413810 | orchestrator | Sunday 25 May 2025 00:51:31 +0000 (0:00:00.402) 0:04:52.342 ************
2025-05-25 00:59:28.413823 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.413828 | orchestrator |
2025-05-25 00:59:28.413832 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] **************
2025-05-25 00:59:28.413837 | orchestrator | Sunday 25 May 2025 00:51:32 +0000 (0:00:00.943) 0:04:53.286 ************
2025-05-25 00:59:28.413841 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.413846 | orchestrator |
2025-05-25 00:59:28.413850 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] *****************************
2025-05-25 00:59:28.413854 | orchestrator | Sunday 25 May 2025 00:51:32 +0000 (0:00:00.192) 0:04:53.478 ************
2025-05-25 00:59:28.413859 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-25 00:59:28.413864 | orchestrator |
2025-05-25 00:59:28.413870 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] ****************************
2025-05-25 00:59:28.413875 | orchestrator | Sunday 25 May 2025 00:51:33 +0000 (0:00:00.774) 0:04:54.252 ************
2025-05-25 00:59:28.413881 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.413899 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.413905 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.413910 | orchestrator |
2025-05-25 00:59:28.413916 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] *******************
2025-05-25 00:59:28.413921 | orchestrator | Sunday 25 May 2025 00:51:34 +0000 (0:00:00.383) 0:04:54.636 ************
2025-05-25 00:59:28.413926 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.413931 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.413937 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.413942 | orchestrator |
2025-05-25 00:59:28.413947 | orchestrator | TASK [ceph-mon : create monitor initial keyring] *******************************
2025-05-25 00:59:28.413952 | orchestrator | Sunday 25 May 2025 00:51:34 +0000 (0:00:00.692) 0:04:55.329 ************
2025-05-25 00:59:28.413957 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.413962 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.413968 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.413973 | orchestrator |
2025-05-25 00:59:28.413978 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] ***********
2025-05-25 00:59:28.413984 | orchestrator | Sunday 25 May 2025 00:51:35 +0000 (0:00:01.132) 0:04:56.461 ************
2025-05-25 00:59:28.413989 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.413995 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414000 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414005 | orchestrator |
2025-05-25 00:59:28.414009 | orchestrator | TASK [ceph-mon : create monitor directory] *************************************
2025-05-25 00:59:28.414030 | orchestrator | Sunday 25 May 2025 00:51:36 +0000 (0:00:00.745) 0:04:57.207 ************
2025-05-25 00:59:28.414035 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414040 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414044 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414049 | orchestrator |
2025-05-25 00:59:28.414054 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] ***************
2025-05-25 00:59:28.414058 | orchestrator | Sunday 25 May 2025 00:51:37 +0000 (0:00:00.780) 0:04:57.988 ************
2025-05-25 00:59:28.414063 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414067 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.414072 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.414076 | orchestrator |
2025-05-25 00:59:28.414081 | orchestrator | TASK [ceph-mon : create custom admin keyring] **********************************
2025-05-25 00:59:28.414085 | orchestrator | Sunday 25 May 2025 00:51:38 +0000 (0:00:00.649) 0:04:58.637 ************
2025-05-25 00:59:28.414090 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414094 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414099 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414103 | orchestrator |
2025-05-25 00:59:28.414108 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] *********************
2025-05-25 00:59:28.414112 | orchestrator | Sunday 25 May 2025 00:51:38 +0000 (0:00:00.312) 0:04:58.950 ************
2025-05-25 00:59:28.414121 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414126 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.414130 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.414135 | orchestrator |
2025-05-25 00:59:28.414139 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************
2025-05-25 00:59:28.414144 | orchestrator | Sunday 25 May 2025 00:51:38 +0000 (0:00:00.275) 0:04:59.225 ************
2025-05-25 00:59:28.414148 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414152 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414157 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414162 | orchestrator |
2025-05-25 00:59:28.414166 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] **************************
2025-05-25 00:59:28.414171 | orchestrator | Sunday 25 May 2025 00:51:39 +0000 (0:00:00.290) 0:04:59.679 ************
2025-05-25 00:59:28.414176 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414180 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.414185 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.414189 | orchestrator |
2025-05-25 00:59:28.414194 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
2025-05-25 00:59:28.414198 | orchestrator | Sunday 25 May 2025 00:51:39 +0000 (0:00:00.290) 0:04:59.969 ************
2025-05-25 00:59:28.414203 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414207 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414211 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414216 | orchestrator |
2025-05-25 00:59:28.414220 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
2025-05-25 00:59:28.414241 | orchestrator | Sunday 25 May 2025 00:51:40 +0000 (0:00:01.535) 0:05:01.505 ************
2025-05-25 00:59:28.414246 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414251 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414255 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414260 | orchestrator |
2025-05-25 00:59:28.414264 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************
2025-05-25 00:59:28.414269 | orchestrator | Sunday 25 May 2025 00:51:41 +0000 (0:00:00.337) 0:05:01.842 ************
2025-05-25 00:59:28.414273 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.414278 | orchestrator |
2025-05-25 00:59:28.414282 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] *************
2025-05-25 00:59:28.414290 | orchestrator | Sunday 25 May 2025 00:51:41 +0000 (0:00:00.642) 0:05:02.485 ************
2025-05-25 00:59:28.414294 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414299 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414303 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414308 | orchestrator |
2025-05-25 00:59:28.414312 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
2025-05-25 00:59:28.414317 | orchestrator | Sunday 25 May 2025 00:51:42 +0000 (0:00:00.283) 0:05:02.768 ************
2025-05-25 00:59:28.414321 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414326 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414330 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414335 | orchestrator |
2025-05-25 00:59:28.414340 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************
2025-05-25 00:59:28.414344 | orchestrator | Sunday 25 May 2025 00:51:42 +0000 (0:00:00.274) 0:05:03.043 ************
2025-05-25 00:59:28.414349 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.414354 | orchestrator |
2025-05-25 00:59:28.414358 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] *****************
2025-05-25 00:59:28.414363 | orchestrator | Sunday 25 May 2025 00:51:43 +0000 (0:00:00.544) 0:05:03.587 ************
2025-05-25 00:59:28.414367 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414371 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414379 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414384 | orchestrator |
2025-05-25 00:59:28.414388 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************
2025-05-25 00:59:28.414393 | orchestrator | Sunday 25 May 2025 00:51:44 +0000 (0:00:01.141) 0:05:04.729 ************
2025-05-25 00:59:28.414397 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414402 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414406 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414411 | orchestrator |
2025-05-25 00:59:28.414415 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] ***************************************
2025-05-25 00:59:28.414420 | orchestrator | Sunday 25 May 2025 00:51:45 +0000 (0:00:01.306) 0:05:06.035 ************
2025-05-25 00:59:28.414424 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414429 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414433 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414438 | orchestrator |
2025-05-25 00:59:28.414442 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************
2025-05-25 00:59:28.414446 | orchestrator | Sunday 25 May 2025 00:51:47 +0000 (0:00:01.691) 0:05:07.727 ************
2025-05-25 00:59:28.414451 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414455 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414460 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414464 | orchestrator |
2025-05-25 00:59:28.414469 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************
2025-05-25 00:59:28.414473 | orchestrator | Sunday 25 May 2025 00:51:49 +0000 (0:00:02.216) 0:05:09.944 ************
2025-05-25 00:59:28.414478 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.414482 | orchestrator |
2025-05-25 00:59:28.414487 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************
2025-05-25 00:59:28.414491 | orchestrator | Sunday 25 May 2025 00:51:50 +0000 (0:00:00.787) 0:05:10.731 ************
2025-05-25 00:59:28.414496 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left).
2025-05-25 00:59:28.414500 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414504 | orchestrator |
2025-05-25 00:59:28.414509 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] **************************************
2025-05-25 00:59:28.414513 | orchestrator | Sunday 25 May 2025 00:52:11 +0000 (0:00:21.456) 0:05:32.188 ************
2025-05-25 00:59:28.414518 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414522 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.414527 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.414531 | orchestrator |
2025-05-25 00:59:28.414536 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] ***********************************
2025-05-25 00:59:28.414540 | orchestrator | Sunday 25 May 2025 00:52:18 +0000 (0:00:07.036) 0:05:39.225 ************
2025-05-25 00:59:28.414545 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414549 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414554 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414558 | orchestrator |
2025-05-25 00:59:28.414562 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-25 00:59:28.414567 | orchestrator | Sunday 25 May 2025 00:52:19 +0000 (0:00:01.124) 0:05:40.349 ************
2025-05-25 00:59:28.414572 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414576 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414581 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414585 | orchestrator |
2025-05-25 00:59:28.414589 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-05-25 00:59:28.414594 | orchestrator | Sunday 25 May 2025 00:52:20 +0000 (0:00:00.701) 0:05:41.050 ************
2025-05-25 00:59:28.414598 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.414603 | orchestrator |
2025-05-25 00:59:28.414607 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-05-25 00:59:28.414629 | orchestrator | Sunday 25 May 2025 00:52:21 +0000 (0:00:00.827) 0:05:41.878 ************
2025-05-25 00:59:28.414635 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414639 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.414644 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.414648 | orchestrator |
2025-05-25 00:59:28.414653 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-25 00:59:28.414657 | orchestrator | Sunday 25 May 2025 00:52:21 +0000 (0:00:00.369) 0:05:42.247 ************
2025-05-25 00:59:28.414662 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414667 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414674 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414682 | orchestrator |
2025-05-25 00:59:28.414692 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-05-25 00:59:28.414700 | orchestrator | Sunday 25 May 2025 00:52:22 +0000 (0:00:01.200) 0:05:43.447 ************
2025-05-25 00:59:28.414707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.414714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.414721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.414727 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414734 | orchestrator |
2025-05-25 00:59:28.414742 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-05-25 00:59:28.414749 | orchestrator | Sunday 25 May 2025 00:52:24 +0000 (0:00:01.134) 0:05:44.582 ************
2025-05-25 00:59:28.414756 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414764 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.414771 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.414779 | orchestrator |
2025-05-25 00:59:28.414786 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-25 00:59:28.414795 | orchestrator | Sunday 25 May 2025 00:52:24 +0000 (0:00:00.376) 0:05:44.958 ************
2025-05-25 00:59:28.414802 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.414809 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.414816 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.414824 | orchestrator |
2025-05-25 00:59:28.414831 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-05-25 00:59:28.414838 | orchestrator |
2025-05-25 00:59:28.414846 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-25 00:59:28.414853 | orchestrator | Sunday 25 May 2025 00:52:26 +0000 (0:00:02.128) 0:05:47.087 ************
2025-05-25 00:59:28.414861 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.414868 | orchestrator |
2025-05-25 00:59:28.414876 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-25 00:59:28.414883 | orchestrator | Sunday 25 May 2025 00:52:27 +0000 (0:00:00.754) 0:05:47.841 ************
2025-05-25 00:59:28.414921 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.414926 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.414930 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.414935 | orchestrator |
2025-05-25 00:59:28.414940 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-25 00:59:28.414944 | orchestrator | Sunday 25 May 2025 00:52:27 +0000 (0:00:00.703) 0:05:48.544 ************
2025-05-25 00:59:28.414949 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414953 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414958 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414962 | orchestrator |
2025-05-25 00:59:28.414967 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-25 00:59:28.414972 | orchestrator | Sunday 25 May 2025 00:52:28 +0000 (0:00:00.321) 0:05:48.865 ************
2025-05-25 00:59:28.414976 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.414981 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.414991 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.414995 | orchestrator |
2025-05-25 00:59:28.415000 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-25 00:59:28.415004 | orchestrator | Sunday 25 May 2025 00:52:28 +0000 (0:00:00.593) 0:05:49.459 ************
2025-05-25 00:59:28.415009 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415014 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415018 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415023 | orchestrator |
2025-05-25 00:59:28.415027 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-25 00:59:28.415032 | orchestrator | Sunday 25 May 2025 00:52:29 +0000 (0:00:00.319) 0:05:49.779 ************
2025-05-25 00:59:28.415036 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.415041 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.415046 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.415050 | orchestrator |
2025-05-25 00:59:28.415055 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-25 00:59:28.415059 | orchestrator | Sunday 25 May 2025 00:52:29 +0000 (0:00:00.713) 0:05:50.493 ************
2025-05-25 00:59:28.415064 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415069 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415073 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415078 | orchestrator |
2025-05-25 00:59:28.415082 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-25 00:59:28.415087 | orchestrator | Sunday 25 May 2025 00:52:30 +0000 (0:00:00.345) 0:05:50.838 ************
2025-05-25 00:59:28.415091 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415096 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415101 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415105 | orchestrator |
2025-05-25 00:59:28.415110 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-25 00:59:28.415114 | orchestrator | Sunday 25 May 2025 00:52:30 +0000 (0:00:00.626) 0:05:51.465 ************
2025-05-25 00:59:28.415119 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415123 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415128 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415132 | orchestrator |
2025-05-25 00:59:28.415137 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-25 00:59:28.415161 | orchestrator | Sunday 25 May 2025 00:52:31 +0000 (0:00:00.345) 0:05:51.811 ************
2025-05-25 00:59:28.415166 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415171 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415175 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415180 | orchestrator |
2025-05-25 00:59:28.415184 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-25 00:59:28.415189 | orchestrator | Sunday 25 May 2025 00:52:31 +0000 (0:00:00.334) 0:05:52.146 ************
2025-05-25 00:59:28.415194 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415198 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415203 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415207 | orchestrator |
2025-05-25 00:59:28.415212 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-25 00:59:28.415220 | orchestrator | Sunday 25 May 2025 00:52:31 +0000 (0:00:00.309) 0:05:52.455 ************
2025-05-25 00:59:28.415225 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.415229 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.415234 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.415238 | orchestrator |
2025-05-25 00:59:28.415243 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-25 00:59:28.415248 | orchestrator | Sunday 25 May 2025 00:52:32 +0000 (0:00:01.016) 0:05:53.471 ************
2025-05-25 00:59:28.415252 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415257 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415261 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415272 | orchestrator |
2025-05-25 00:59:28.415277 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-25 00:59:28.415281 | orchestrator | Sunday 25 May 2025 00:52:33 +0000 (0:00:00.338) 0:05:53.810 ************
2025-05-25 00:59:28.415286 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.415291 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.415295 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.415300 | orchestrator |
2025-05-25 00:59:28.415304 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-25 00:59:28.415309 | orchestrator | Sunday 25 May 2025 00:52:33 +0000 (0:00:00.340) 0:05:54.151 ************
2025-05-25 00:59:28.415313 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415318 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415322 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415327 | orchestrator |
2025-05-25 00:59:28.415331 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-25 00:59:28.415336 | orchestrator | Sunday 25 May 2025 00:52:33 +0000 (0:00:00.325) 0:05:54.476 ************
2025-05-25 00:59:28.415340 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415345 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415349 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415354 | orchestrator |
2025-05-25 00:59:28.415358 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-25 00:59:28.415363 | orchestrator | Sunday 25 May 2025 00:52:34 +0000 (0:00:00.587) 0:05:55.064 ************
2025-05-25 00:59:28.415368 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415372 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415377 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415381 | orchestrator |
2025-05-25 00:59:28.415386 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-25 00:59:28.415390 | orchestrator | Sunday 25 May 2025 00:52:34 +0000 (0:00:00.366) 0:05:55.430 ************
2025-05-25 00:59:28.415395 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415400 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415404 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415409 | orchestrator |
2025-05-25 00:59:28.415413 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-25 00:59:28.415418 | orchestrator | Sunday 25 May 2025 00:52:35 +0000 (0:00:00.338) 0:05:55.769 ************
2025-05-25 00:59:28.415422 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415426 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415430 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415435 | orchestrator |
2025-05-25 00:59:28.415439 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-25 00:59:28.415443 | orchestrator | Sunday 25 May 2025 00:52:35 +0000 (0:00:00.431) 0:05:56.200 ************
2025-05-25 00:59:28.415447 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.415451 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.415455 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.415459 | orchestrator |
2025-05-25 00:59:28.415463 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-25 00:59:28.415467 | orchestrator | Sunday 25 May 2025 00:52:36 +0000 (0:00:00.616) 0:05:56.817 ************
2025-05-25 00:59:28.415472 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.415476 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.415480 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.415484 | orchestrator |
2025-05-25 00:59:28.415488 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-25 00:59:28.415492 | orchestrator | Sunday 25 May 2025 00:52:36 +0000 (0:00:00.372) 0:05:57.190 ************
2025-05-25 00:59:28.415496 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415500 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415505 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415509 | orchestrator |
2025-05-25 00:59:28.415513 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-25 00:59:28.415520 | orchestrator | Sunday 25 May 2025 00:52:36 +0000 (0:00:00.358) 0:05:57.548 ************
2025-05-25 00:59:28.415525 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415529 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415533 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415537 | orchestrator |
2025-05-25 00:59:28.415541 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-25 00:59:28.415545 | orchestrator | Sunday 25 May 2025 00:52:37 +0000 (0:00:00.551) 0:05:57.931 ************
2025-05-25 00:59:28.415549 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415553 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415558 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415562 | orchestrator |
2025-05-25 00:59:28.415566 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-25 00:59:28.415570 | orchestrator | Sunday 25 May 2025 00:52:37 +0000 (0:00:00.551) 0:05:58.482 ************
2025-05-25 00:59:28.415586 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415591 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415595 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415600 | orchestrator |
2025-05-25 00:59:28.415604 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-25 00:59:28.415608 | orchestrator | Sunday 25 May 2025 00:52:38 +0000 (0:00:00.376) 0:05:58.858 ************
2025-05-25 00:59:28.415612 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415616 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415620 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415624 | orchestrator |
2025-05-25 00:59:28.415628 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-25 00:59:28.415637 | orchestrator | Sunday 25 May 2025 00:52:38 +0000 (0:00:00.355) 0:05:59.214 ************
2025-05-25 00:59:28.415641 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415646 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415650 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415654 | orchestrator |
2025-05-25 00:59:28.415658 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-25 00:59:28.415662 | orchestrator | Sunday 25 May 2025 00:52:39 +0000 (0:00:00.344) 0:05:59.558 ************
2025-05-25 00:59:28.415666 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415670 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415674 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415678 | orchestrator |
2025-05-25 00:59:28.415683 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-25 00:59:28.415687 | orchestrator | Sunday 25 May 2025 00:52:39 +0000 (0:00:00.467) 0:06:00.208 ************
2025-05-25 00:59:28.415691 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415695 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415699 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415703 | orchestrator |
2025-05-25 00:59:28.415707 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-25 00:59:28.415711 | orchestrator | Sunday 25 May 2025 00:52:40 +0000 (0:00:00.467) 0:06:00.675 ************
2025-05-25 00:59:28.415715 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.415720 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.415724 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.415728 | orchestrator |
2025-05-25 00:59:28.415732 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-25 00:59:28.415736 | orchestrator | Sunday 25 May 2025 00:52:40 +0000 (0:00:00.403) 0:06:01.078 ************ 2025-05-25 00:59:28.415740 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.415744 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.415748 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.415753 | orchestrator | 2025-05-25 00:59:28.415757 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-25 00:59:28.415774 | orchestrator | Sunday 25 May 2025 00:52:40 +0000 (0:00:00.314) 0:06:01.393 ************ 2025-05-25 00:59:28.415778 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.415782 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.415786 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.415790 | orchestrator | 2025-05-25 00:59:28.415795 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-25 00:59:28.415799 | orchestrator | Sunday 25 May 2025 00:52:41 +0000 (0:00:00.646) 0:06:02.039 ************ 2025-05-25 00:59:28.415803 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.415807 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.415811 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.415815 | orchestrator | 2025-05-25 00:59:28.415820 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-25 00:59:28.415824 | orchestrator | Sunday 25 May 2025 00:52:41 +0000 (0:00:00.341) 0:06:02.381 ************ 2025-05-25 00:59:28.415828 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-25 00:59:28.415832 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-25 00:59:28.415836 | orchestrator 
| skipping: [testbed-node-1] => (item=)  2025-05-25 00:59:28.415840 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-25 00:59:28.415844 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.415849 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.415853 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-25 00:59:28.415857 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-25 00:59:28.415861 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.415865 | orchestrator | 2025-05-25 00:59:28.415869 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-25 00:59:28.415873 | orchestrator | Sunday 25 May 2025 00:52:42 +0000 (0:00:00.486) 0:06:02.867 ************ 2025-05-25 00:59:28.415877 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-25 00:59:28.415881 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-25 00:59:28.415899 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-25 00:59:28.415907 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-25 00:59:28.415914 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.415920 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.415926 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-25 00:59:28.415934 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-25 00:59:28.415941 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.415947 | orchestrator | 2025-05-25 00:59:28.415954 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-25 00:59:28.415961 | orchestrator | Sunday 25 May 2025 00:52:42 +0000 (0:00:00.373) 0:06:03.241 ************ 2025-05-25 00:59:28.415968 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.415976 | 
orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.415980 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.415984 | orchestrator | 2025-05-25 00:59:28.415988 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-25 00:59:28.416008 | orchestrator | Sunday 25 May 2025 00:52:43 +0000 (0:00:00.610) 0:06:03.852 ************ 2025-05-25 00:59:28.416013 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416017 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416021 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416025 | orchestrator | 2025-05-25 00:59:28.416029 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-25 00:59:28.416034 | orchestrator | Sunday 25 May 2025 00:52:43 +0000 (0:00:00.360) 0:06:04.212 ************ 2025-05-25 00:59:28.416038 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416047 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416051 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416055 | orchestrator | 2025-05-25 00:59:28.416062 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-25 00:59:28.416066 | orchestrator | Sunday 25 May 2025 00:52:43 +0000 (0:00:00.332) 0:06:04.545 ************ 2025-05-25 00:59:28.416071 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416075 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416079 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416083 | orchestrator | 2025-05-25 00:59:28.416087 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-25 00:59:28.416091 | orchestrator | Sunday 25 May 2025 00:52:44 +0000 (0:00:00.341) 0:06:04.887 ************ 2025-05-25 00:59:28.416096 | orchestrator | 
skipping: [testbed-node-0] 2025-05-25 00:59:28.416100 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416104 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416108 | orchestrator | 2025-05-25 00:59:28.416112 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-25 00:59:28.416117 | orchestrator | Sunday 25 May 2025 00:52:44 +0000 (0:00:00.564) 0:06:05.452 ************ 2025-05-25 00:59:28.416121 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416125 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416129 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416133 | orchestrator | 2025-05-25 00:59:28.416137 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-25 00:59:28.416141 | orchestrator | Sunday 25 May 2025 00:52:45 +0000 (0:00:00.373) 0:06:05.825 ************ 2025-05-25 00:59:28.416145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.416150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.416154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.416158 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416162 | orchestrator | 2025-05-25 00:59:28.416166 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-25 00:59:28.416170 | orchestrator | Sunday 25 May 2025 00:52:45 +0000 (0:00:00.441) 0:06:06.267 ************ 2025-05-25 00:59:28.416174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.416179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.416183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.416187 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416191 | 
orchestrator | 2025-05-25 00:59:28.416195 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-25 00:59:28.416199 | orchestrator | Sunday 25 May 2025 00:52:46 +0000 (0:00:00.419) 0:06:06.686 ************ 2025-05-25 00:59:28.416203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.416207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.416211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.416216 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416220 | orchestrator | 2025-05-25 00:59:28.416224 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-25 00:59:28.416228 | orchestrator | Sunday 25 May 2025 00:52:46 +0000 (0:00:00.415) 0:06:07.102 ************ 2025-05-25 00:59:28.416232 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416236 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416241 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416245 | orchestrator | 2025-05-25 00:59:28.416249 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-25 00:59:28.416253 | orchestrator | Sunday 25 May 2025 00:52:46 +0000 (0:00:00.338) 0:06:07.440 ************ 2025-05-25 00:59:28.416257 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-25 00:59:28.416261 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416268 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-25 00:59:28.416272 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416276 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-25 00:59:28.416281 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416285 | orchestrator | 2025-05-25 00:59:28.416289 | orchestrator | TASK [ceph-facts : set_fact 
is_rgw_instances_defined] ************************** 2025-05-25 00:59:28.416293 | orchestrator | Sunday 25 May 2025 00:52:47 +0000 (0:00:00.823) 0:06:08.264 ************ 2025-05-25 00:59:28.416297 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416301 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416305 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416309 | orchestrator | 2025-05-25 00:59:28.416313 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-25 00:59:28.416318 | orchestrator | Sunday 25 May 2025 00:52:48 +0000 (0:00:00.330) 0:06:08.594 ************ 2025-05-25 00:59:28.416322 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416326 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416330 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416334 | orchestrator | 2025-05-25 00:59:28.416338 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-25 00:59:28.416342 | orchestrator | Sunday 25 May 2025 00:52:48 +0000 (0:00:00.342) 0:06:08.936 ************ 2025-05-25 00:59:28.416347 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-25 00:59:28.416351 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416355 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-25 00:59:28.416359 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416375 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-25 00:59:28.416380 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416384 | orchestrator | 2025-05-25 00:59:28.416388 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-25 00:59:28.416392 | orchestrator | Sunday 25 May 2025 00:52:49 +0000 (0:00:00.690) 0:06:09.626 ************ 2025-05-25 00:59:28.416396 | orchestrator | skipping: [testbed-node-0] 
2025-05-25 00:59:28.416400 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416404 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416408 | orchestrator | 2025-05-25 00:59:28.416412 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-25 00:59:28.416416 | orchestrator | Sunday 25 May 2025 00:52:49 +0000 (0:00:00.344) 0:06:09.971 ************ 2025-05-25 00:59:28.416423 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.416427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.416431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.416435 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416439 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-25 00:59:28.416443 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-25 00:59:28.416447 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-25 00:59:28.416451 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416455 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-25 00:59:28.416459 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-25 00:59:28.416464 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-25 00:59:28.416468 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416472 | orchestrator | 2025-05-25 00:59:28.416476 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-25 00:59:28.416480 | orchestrator | Sunday 25 May 2025 00:52:50 +0000 (0:00:00.618) 0:06:10.589 ************ 2025-05-25 00:59:28.416484 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416488 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416492 | orchestrator | skipping: 
[testbed-node-2] 2025-05-25 00:59:28.416499 | orchestrator | 2025-05-25 00:59:28.416503 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-25 00:59:28.416507 | orchestrator | Sunday 25 May 2025 00:52:50 +0000 (0:00:00.841) 0:06:11.431 ************ 2025-05-25 00:59:28.416512 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416516 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416520 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416524 | orchestrator | 2025-05-25 00:59:28.416528 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-25 00:59:28.416532 | orchestrator | Sunday 25 May 2025 00:52:51 +0000 (0:00:00.596) 0:06:12.027 ************ 2025-05-25 00:59:28.416536 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416540 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416544 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416548 | orchestrator | 2025-05-25 00:59:28.416552 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-25 00:59:28.416556 | orchestrator | Sunday 25 May 2025 00:52:52 +0000 (0:00:00.770) 0:06:12.797 ************ 2025-05-25 00:59:28.416560 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416564 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416568 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416573 | orchestrator | 2025-05-25 00:59:28.416577 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-25 00:59:28.416581 | orchestrator | Sunday 25 May 2025 00:52:52 +0000 (0:00:00.570) 0:06:13.368 ************ 2025-05-25 00:59:28.416585 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 00:59:28.416589 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2025-05-25 00:59:28.416593 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 00:59:28.416597 | orchestrator | 2025-05-25 00:59:28.416601 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-25 00:59:28.416605 | orchestrator | Sunday 25 May 2025 00:52:53 +0000 (0:00:00.891) 0:06:14.260 ************ 2025-05-25 00:59:28.416610 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:59:28.416614 | orchestrator | 2025-05-25 00:59:28.416618 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-25 00:59:28.416622 | orchestrator | Sunday 25 May 2025 00:52:54 +0000 (0:00:00.808) 0:06:15.068 ************ 2025-05-25 00:59:28.416626 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:59:28.416630 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:59:28.416634 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:59:28.416638 | orchestrator | 2025-05-25 00:59:28.416642 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-25 00:59:28.416646 | orchestrator | Sunday 25 May 2025 00:52:55 +0000 (0:00:00.715) 0:06:15.784 ************ 2025-05-25 00:59:28.416650 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416654 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416658 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416663 | orchestrator | 2025-05-25 00:59:28.416667 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-25 00:59:28.416671 | orchestrator | Sunday 25 May 2025 00:52:55 +0000 (0:00:00.357) 0:06:16.142 ************ 2025-05-25 00:59:28.416675 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-25 00:59:28.416679 | orchestrator | changed: 
[testbed-node-0] => (item=None) 2025-05-25 00:59:28.416683 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-25 00:59:28.416687 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-25 00:59:28.416691 | orchestrator | 2025-05-25 00:59:28.416695 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-25 00:59:28.416699 | orchestrator | Sunday 25 May 2025 00:53:03 +0000 (0:00:08.265) 0:06:24.407 ************ 2025-05-25 00:59:28.416718 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.416734 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.416743 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.416749 | orchestrator | 2025-05-25 00:59:28.416756 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-25 00:59:28.416762 | orchestrator | Sunday 25 May 2025 00:53:04 +0000 (0:00:00.388) 0:06:24.796 ************ 2025-05-25 00:59:28.416769 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-25 00:59:28.416775 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-25 00:59:28.416781 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-25 00:59:28.416788 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-25 00:59:28.416798 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 00:59:28.416805 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 00:59:28.416812 | orchestrator | 2025-05-25 00:59:28.416818 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-25 00:59:28.416825 | orchestrator | Sunday 25 May 2025 00:53:06 +0000 (0:00:02.093) 0:06:26.889 ************ 2025-05-25 00:59:28.416831 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-25 00:59:28.416835 | orchestrator | skipping: [testbed-node-1] 
=> (item=None)  2025-05-25 00:59:28.416839 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-25 00:59:28.416843 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-25 00:59:28.416847 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-25 00:59:28.416851 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-25 00:59:28.416855 | orchestrator | 2025-05-25 00:59:28.416859 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-25 00:59:28.416864 | orchestrator | Sunday 25 May 2025 00:53:07 +0000 (0:00:01.256) 0:06:28.146 ************ 2025-05-25 00:59:28.416868 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.416872 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.416876 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.416880 | orchestrator | 2025-05-25 00:59:28.416884 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-25 00:59:28.416920 | orchestrator | Sunday 25 May 2025 00:53:08 +0000 (0:00:00.684) 0:06:28.831 ************ 2025-05-25 00:59:28.416925 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416929 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416933 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416937 | orchestrator | 2025-05-25 00:59:28.416941 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-25 00:59:28.416945 | orchestrator | Sunday 25 May 2025 00:53:08 +0000 (0:00:00.544) 0:06:29.375 ************ 2025-05-25 00:59:28.416949 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.416954 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.416958 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.416962 | orchestrator | 2025-05-25 00:59:28.416966 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] 
**************************************** 2025-05-25 00:59:28.416970 | orchestrator | Sunday 25 May 2025 00:53:09 +0000 (0:00:00.340) 0:06:29.716 ************ 2025-05-25 00:59:28.416974 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:59:28.416978 | orchestrator | 2025-05-25 00:59:28.416983 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-25 00:59:28.416990 | orchestrator | Sunday 25 May 2025 00:53:09 +0000 (0:00:00.578) 0:06:30.294 ************ 2025-05-25 00:59:28.416999 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.417007 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.417014 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.417020 | orchestrator | 2025-05-25 00:59:28.417027 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-25 00:59:28.417033 | orchestrator | Sunday 25 May 2025 00:53:10 +0000 (0:00:00.659) 0:06:30.954 ************ 2025-05-25 00:59:28.417048 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.417054 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.417062 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.417068 | orchestrator | 2025-05-25 00:59:28.417076 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-25 00:59:28.417080 | orchestrator | Sunday 25 May 2025 00:53:10 +0000 (0:00:00.393) 0:06:31.347 ************ 2025-05-25 00:59:28.417085 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 00:59:28.417089 | orchestrator | 2025-05-25 00:59:28.417093 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-25 00:59:28.417097 | orchestrator | Sunday 25 May 2025 00:53:11 +0000 (0:00:00.594) 
0:06:31.942 ************ 2025-05-25 00:59:28.417101 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:59:28.417105 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:59:28.417109 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:59:28.417113 | orchestrator | 2025-05-25 00:59:28.417117 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-25 00:59:28.417121 | orchestrator | Sunday 25 May 2025 00:53:12 +0000 (0:00:01.152) 0:06:33.095 ************ 2025-05-25 00:59:28.417125 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:59:28.417129 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:59:28.417133 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:59:28.417137 | orchestrator | 2025-05-25 00:59:28.417141 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-25 00:59:28.417145 | orchestrator | Sunday 25 May 2025 00:53:13 +0000 (0:00:01.117) 0:06:34.213 ************ 2025-05-25 00:59:28.417149 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:59:28.417153 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:59:28.417157 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:59:28.417161 | orchestrator | 2025-05-25 00:59:28.417166 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-25 00:59:28.417170 | orchestrator | Sunday 25 May 2025 00:53:15 +0000 (0:00:01.690) 0:06:35.904 ************ 2025-05-25 00:59:28.417174 | orchestrator | changed: [testbed-node-0] 2025-05-25 00:59:28.417178 | orchestrator | changed: [testbed-node-1] 2025-05-25 00:59:28.417204 | orchestrator | changed: [testbed-node-2] 2025-05-25 00:59:28.417209 | orchestrator | 2025-05-25 00:59:28.417214 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-25 00:59:28.417218 | orchestrator | Sunday 25 May 2025 00:53:17 +0000 (0:00:01.895) 0:06:37.800 
************
2025-05-25 00:59:28.417222 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.417226 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.417230 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-05-25 00:59:28.417234 | orchestrator |
2025-05-25 00:59:28.417238 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************
2025-05-25 00:59:28.417246 | orchestrator | Sunday 25 May 2025 00:53:17 +0000 (0:00:00.577) 0:06:38.377 ************
2025-05-25 00:59:28.417250 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left).
2025-05-25 00:59:28.417254 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left).
2025-05-25 00:59:28.417259 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-05-25 00:59:28.417262 | orchestrator |
2025-05-25 00:59:28.417266 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************
2025-05-25 00:59:28.417270 | orchestrator | Sunday 25 May 2025 00:53:31 +0000 (0:00:13.264) 0:06:51.642 ************
2025-05-25 00:59:28.417274 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-05-25 00:59:28.417277 | orchestrator |
2025-05-25 00:59:28.417281 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-05-25 00:59:28.417285 | orchestrator | Sunday 25 May 2025 00:53:32 +0000 (0:00:01.723) 0:06:53.366 ************
2025-05-25 00:59:28.417293 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.417296 | orchestrator |
2025-05-25 00:59:28.417300 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************
2025-05-25 00:59:28.417304 | orchestrator | Sunday 25 May 2025 00:53:33 +0000 (0:00:00.467) 0:06:53.833 ************
2025-05-25 00:59:28.417308 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.417312 | orchestrator |
2025-05-25 00:59:28.417315 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************
2025-05-25 00:59:28.417319 | orchestrator | Sunday 25 May 2025 00:53:33 +0000 (0:00:00.310) 0:06:54.144 ************
2025-05-25 00:59:28.417323 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-05-25 00:59:28.417327 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-05-25 00:59:28.417330 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-05-25 00:59:28.417334 | orchestrator |
2025-05-25 00:59:28.417340 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] **************************************
2025-05-25 00:59:28.417344 | orchestrator | Sunday 25 May 2025 00:53:40 +0000 (0:00:06.515) 0:07:00.659 ************
2025-05-25 00:59:28.417348 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-05-25 00:59:28.417352 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-05-25 00:59:28.417355 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-05-25 00:59:28.417359 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-05-25 00:59:28.417363 | orchestrator |
2025-05-25 00:59:28.417366 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-25 00:59:28.417370 | orchestrator | Sunday 25 May 2025 00:53:45 +0000 (0:00:05.834) 0:07:06.493 ************
2025-05-25 00:59:28.417374 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.417378 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.417381 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.417385 | orchestrator |
2025-05-25 00:59:28.417389 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-05-25 00:59:28.417392 | orchestrator | Sunday 25 May 2025 00:53:46 +0000 (0:00:00.717) 0:07:07.211 ************
2025-05-25 00:59:28.417396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:28.417400 | orchestrator |
2025-05-25 00:59:28.417404 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-05-25 00:59:28.417407 | orchestrator | Sunday 25 May 2025 00:53:47 +0000 (0:00:00.367) 0:07:07.956 ************
2025-05-25 00:59:28.417411 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.417415 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.417418 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.417422 | orchestrator |
2025-05-25 00:59:28.417426 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-05-25 00:59:28.417430 | orchestrator | Sunday 25 May 2025 00:53:47 +0000 (0:00:00.367) 0:07:08.323 ************
2025-05-25 00:59:28.417433 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.417437 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.417441 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.417445 | orchestrator |
2025-05-25 00:59:28.417448 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-05-25 00:59:28.417452 | orchestrator | Sunday 25 May 2025 00:53:48 +0000 (0:00:01.208) 0:07:09.532 ************
2025-05-25 00:59:28.417456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 00:59:28.417459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 00:59:28.417463 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 00:59:28.417467 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.417471 | orchestrator |
2025-05-25 00:59:28.417474 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-05-25 00:59:28.417481 | orchestrator | Sunday 25 May 2025 00:53:50 +0000 (0:00:01.138) 0:07:10.670 ************
2025-05-25 00:59:28.417485 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.417489 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.417492 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.417496 | orchestrator |
2025-05-25 00:59:28.417513 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-25 00:59:28.417517 | orchestrator | Sunday 25 May 2025 00:53:50 +0000 (0:00:00.356) 0:07:11.027 ************
2025-05-25 00:59:28.417521 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.417525 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.417528 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.417532 | orchestrator |
2025-05-25 00:59:28.417536 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-05-25 00:59:28.417540 | orchestrator |
2025-05-25 00:59:28.417543 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-25 00:59:28.417547 | orchestrator | Sunday 25 May 2025 00:53:52 +0000 (0:00:01.975) 0:07:13.002 ************
2025-05-25 00:59:28.417553 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.417557 | orchestrator |
2025-05-25 00:59:28.417561 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-25 00:59:28.417564 | orchestrator | Sunday 25 May 2025 00:53:53 +0000 (0:00:00.810) 0:07:13.813 ************
2025-05-25 00:59:28.417568 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417572 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417575 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417579 | orchestrator |
2025-05-25 00:59:28.417583 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-25 00:59:28.417587 | orchestrator | Sunday 25 May 2025 00:53:53 +0000 (0:00:00.344) 0:07:14.157 ************
2025-05-25 00:59:28.417590 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.417594 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.417598 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.417601 | orchestrator |
2025-05-25 00:59:28.417605 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-25 00:59:28.417609 | orchestrator | Sunday 25 May 2025 00:53:54 +0000 (0:00:00.646) 0:07:14.804 ************
2025-05-25 00:59:28.417613 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.417616 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.417620 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.417624 | orchestrator |
2025-05-25 00:59:28.417627 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-25 00:59:28.417631 | orchestrator | Sunday 25 May 2025 00:53:55 +0000 (0:00:01.041) 0:07:15.846 ************
2025-05-25 00:59:28.417635 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.417638 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.417642 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.417646 | orchestrator |
2025-05-25 00:59:28.417649 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-25 00:59:28.417653 | orchestrator | Sunday 25 May 2025 00:53:55 +0000 (0:00:00.693) 0:07:16.540 ************
2025-05-25 00:59:28.417657 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417660 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417664 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417668 | orchestrator |
2025-05-25 00:59:28.417672 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-25 00:59:28.417675 | orchestrator | Sunday 25 May 2025 00:53:56 +0000 (0:00:00.312) 0:07:16.852 ************
2025-05-25 00:59:28.417679 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417683 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417686 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417690 | orchestrator |
2025-05-25 00:59:28.417694 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-25 00:59:28.417701 | orchestrator | Sunday 25 May 2025 00:53:56 +0000 (0:00:00.323) 0:07:17.176 ************
2025-05-25 00:59:28.417705 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417709 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417712 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417716 | orchestrator |
2025-05-25 00:59:28.417720 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-25 00:59:28.417724 | orchestrator | Sunday 25 May 2025 00:53:57 +0000 (0:00:00.534) 0:07:17.710 ************
2025-05-25 00:59:28.417727 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417731 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417735 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417738 | orchestrator |
2025-05-25 00:59:28.417742 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-25 00:59:28.417746 | orchestrator | Sunday 25 May 2025 00:53:57 +0000 (0:00:00.318) 0:07:18.029 ************
2025-05-25 00:59:28.417750 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417753 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417757 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417761 | orchestrator |
2025-05-25 00:59:28.417764 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-25 00:59:28.417768 | orchestrator | Sunday 25 May 2025 00:53:57 +0000 (0:00:00.355) 0:07:18.384 ************
2025-05-25 00:59:28.417772 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417775 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417779 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417783 | orchestrator |
2025-05-25 00:59:28.417786 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-25 00:59:28.417790 | orchestrator | Sunday 25 May 2025 00:53:58 +0000 (0:00:00.317) 0:07:18.702 ************
2025-05-25 00:59:28.417794 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.417798 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.417801 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.417805 | orchestrator |
2025-05-25 00:59:28.417809 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-25 00:59:28.417812 | orchestrator | Sunday 25 May 2025 00:53:59 +0000 (0:00:01.071) 0:07:19.773 ************
2025-05-25 00:59:28.417816 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417820 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417824 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417827 | orchestrator |
2025-05-25 00:59:28.417831 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-25 00:59:28.417835 | orchestrator | Sunday 25 May 2025 00:53:59 +0000 (0:00:00.346) 0:07:20.119 ************
2025-05-25 00:59:28.417839 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417854 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417858 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417862 | orchestrator |
2025-05-25 00:59:28.417865 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-25 00:59:28.417869 | orchestrator | Sunday 25 May 2025 00:53:59 +0000 (0:00:00.304) 0:07:20.424 ************
2025-05-25 00:59:28.417873 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.417876 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.417880 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.417902 | orchestrator |
2025-05-25 00:59:28.417907 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-25 00:59:28.417911 | orchestrator | Sunday 25 May 2025 00:54:00 +0000 (0:00:00.358) 0:07:20.783 ************
2025-05-25 00:59:28.417915 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.417923 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.417927 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.417931 | orchestrator |
2025-05-25 00:59:28.417935 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-25 00:59:28.417939 | orchestrator | Sunday 25 May 2025 00:54:00 +0000 (0:00:00.681) 0:07:21.465 ************
2025-05-25 00:59:28.417946 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.417950 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.417953 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.417957 | orchestrator |
2025-05-25 00:59:28.417961 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-25 00:59:28.417965 | orchestrator | Sunday 25 May 2025 00:54:01 +0000 (0:00:00.401) 0:07:21.866 ************
2025-05-25 00:59:28.417969 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417972 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417976 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.417980 | orchestrator |
2025-05-25 00:59:28.417984 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-25 00:59:28.417988 | orchestrator | Sunday 25 May 2025 00:54:01 +0000 (0:00:00.347) 0:07:22.214 ************
2025-05-25 00:59:28.417991 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.417995 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.417999 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418003 | orchestrator |
2025-05-25 00:59:28.418007 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-25 00:59:28.418011 | orchestrator | Sunday 25 May 2025 00:54:01 +0000 (0:00:00.307) 0:07:22.521 ************
2025-05-25 00:59:28.418031 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418035 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418039 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418043 | orchestrator |
2025-05-25 00:59:28.418046 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-25 00:59:28.418050 | orchestrator | Sunday 25 May 2025 00:54:02 +0000 (0:00:00.547) 0:07:23.069 ************
2025-05-25 00:59:28.418054 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.418058 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.418061 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.418065 | orchestrator |
2025-05-25 00:59:28.418069 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-25 00:59:28.418073 | orchestrator | Sunday 25 May 2025 00:54:02 +0000 (0:00:00.353) 0:07:23.422 ************
2025-05-25 00:59:28.418076 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418080 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418084 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418087 | orchestrator |
2025-05-25 00:59:28.418091 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-25 00:59:28.418095 | orchestrator | Sunday 25 May 2025 00:54:03 +0000 (0:00:00.318) 0:07:23.741 ************
2025-05-25 00:59:28.418099 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418102 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418106 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418110 | orchestrator |
2025-05-25 00:59:28.418114 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-25 00:59:28.418117 | orchestrator | Sunday 25 May 2025 00:54:03 +0000 (0:00:00.342) 0:07:24.084 ************
2025-05-25 00:59:28.418121 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418125 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418128 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418132 | orchestrator |
2025-05-25 00:59:28.418136 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-25 00:59:28.418139 | orchestrator | Sunday 25 May 2025 00:54:04 +0000 (0:00:00.593) 0:07:24.677 ************
2025-05-25 00:59:28.418143 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418147 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418151 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418154 | orchestrator |
2025-05-25 00:59:28.418160 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-25 00:59:28.418166 | orchestrator | Sunday 25 May 2025 00:54:04 +0000 (0:00:00.324) 0:07:25.001 ************
2025-05-25 00:59:28.418177 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418183 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418188 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418194 | orchestrator |
2025-05-25 00:59:28.418200 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-25 00:59:28.418206 | orchestrator | Sunday 25 May 2025 00:54:04 +0000 (0:00:00.327) 0:07:25.329 ************
2025-05-25 00:59:28.418212 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418218 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418224 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418230 | orchestrator |
2025-05-25 00:59:28.418236 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-25 00:59:28.418243 | orchestrator | Sunday 25 May 2025 00:54:05 +0000 (0:00:00.307) 0:07:25.637 ************
2025-05-25 00:59:28.418248 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418252 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418256 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418259 | orchestrator |
2025-05-25 00:59:28.418263 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-25 00:59:28.418267 | orchestrator | Sunday 25 May 2025 00:54:05 +0000 (0:00:00.627) 0:07:26.264 ************
2025-05-25 00:59:28.418271 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418291 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418296 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418300 | orchestrator |
2025-05-25 00:59:28.418303 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-25 00:59:28.418307 | orchestrator | Sunday 25 May 2025 00:54:06 +0000 (0:00:00.345) 0:07:26.609 ************
2025-05-25 00:59:28.418311 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418315 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418318 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418322 | orchestrator |
2025-05-25 00:59:28.418326 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-25 00:59:28.418333 | orchestrator | Sunday 25 May 2025 00:54:06 +0000 (0:00:00.350) 0:07:26.960 ************
2025-05-25 00:59:28.418337 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418340 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418344 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418348 | orchestrator |
2025-05-25 00:59:28.418351 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-25 00:59:28.418355 | orchestrator | Sunday 25 May 2025 00:54:06 +0000 (0:00:00.322) 0:07:27.282 ************
2025-05-25 00:59:28.418359 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418362 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418366 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418370 | orchestrator |
2025-05-25 00:59:28.418374 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-25 00:59:28.418377 | orchestrator | Sunday 25 May 2025 00:54:07 +0000 (0:00:00.601) 0:07:27.884 ************
2025-05-25 00:59:28.418381 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418385 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418388 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418392 | orchestrator |
2025-05-25 00:59:28.418396 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-25 00:59:28.418400 | orchestrator | Sunday 25 May 2025 00:54:07 +0000 (0:00:00.432) 0:07:28.316 ************
2025-05-25 00:59:28.418403 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.418407 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.418411 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418415 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.418418 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.418422 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418430 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.418434 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.418437 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418441 | orchestrator |
2025-05-25 00:59:28.418445 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-25 00:59:28.418449 | orchestrator | Sunday 25 May 2025 00:54:08 +0000 (0:00:00.373) 0:07:28.689 ************
2025-05-25 00:59:28.418452 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-25 00:59:28.418456 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-25 00:59:28.418460 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418464 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-25 00:59:28.418467 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-25 00:59:28.418471 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418475 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-25 00:59:28.418479 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-25 00:59:28.418482 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418486 | orchestrator |
2025-05-25 00:59:28.418490 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-25 00:59:28.418494 | orchestrator | Sunday 25 May 2025 00:54:08 +0000 (0:00:00.370) 0:07:29.060 ************
2025-05-25 00:59:28.418497 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418501 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418505 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418509 | orchestrator |
2025-05-25 00:59:28.418512 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-25 00:59:28.418516 | orchestrator | Sunday 25 May 2025 00:54:09 +0000 (0:00:00.629) 0:07:29.689 ************
2025-05-25 00:59:28.418520 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418523 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418527 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418531 | orchestrator |
2025-05-25 00:59:28.418535 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-25 00:59:28.418538 | orchestrator | Sunday 25 May 2025 00:54:09 +0000 (0:00:00.378) 0:07:30.068 ************
2025-05-25 00:59:28.418542 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418546 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418550 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418553 | orchestrator |
2025-05-25 00:59:28.418557 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-25 00:59:28.418561 | orchestrator | Sunday 25 May 2025 00:54:09 +0000 (0:00:00.345) 0:07:30.413 ************
2025-05-25 00:59:28.418564 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418568 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418572 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418576 | orchestrator |
2025-05-25 00:59:28.418579 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-25 00:59:28.418583 | orchestrator | Sunday 25 May 2025 00:54:10 +0000 (0:00:00.350) 0:07:30.764 ************
2025-05-25 00:59:28.418587 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418591 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418594 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418598 | orchestrator |
2025-05-25 00:59:28.418602 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-25 00:59:28.418617 | orchestrator | Sunday 25 May 2025 00:54:10 +0000 (0:00:00.684) 0:07:31.448 ************
2025-05-25 00:59:28.418621 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418625 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418629 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418632 | orchestrator |
2025-05-25 00:59:28.418639 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-25 00:59:28.418643 | orchestrator | Sunday 25 May 2025 00:54:11 +0000 (0:00:00.373) 0:07:31.822 ************
2025-05-25 00:59:28.418647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.418651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.418655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.418660 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418664 | orchestrator |
2025-05-25 00:59:28.418668 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-25 00:59:28.418672 | orchestrator | Sunday 25 May 2025 00:54:11 +0000 (0:00:00.493) 0:07:32.315 ************
2025-05-25 00:59:28.418675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.418679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.418683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.418687 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418690 | orchestrator |
2025-05-25 00:59:28.418694 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-25 00:59:28.418698 | orchestrator | Sunday 25 May 2025 00:54:12 +0000 (0:00:00.446) 0:07:32.762 ************
2025-05-25 00:59:28.418701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.418705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.418709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.418712 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418716 | orchestrator |
2025-05-25 00:59:28.418720 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.418723 | orchestrator | Sunday 25 May 2025 00:54:12 +0000 (0:00:00.437) 0:07:33.200 ************
2025-05-25 00:59:28.418727 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418731 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418735 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418738 | orchestrator |
2025-05-25 00:59:28.418742 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-25 00:59:28.418746 | orchestrator | Sunday 25 May 2025 00:54:12 +0000 (0:00:00.330) 0:07:33.531 ************
2025-05-25 00:59:28.418749 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.418753 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418757 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.418760 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418764 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.418768 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418771 | orchestrator |
2025-05-25 00:59:28.418775 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-25 00:59:28.418779 | orchestrator | Sunday 25 May 2025 00:54:13 +0000 (0:00:00.787) 0:07:34.319 ************
2025-05-25 00:59:28.418782 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418786 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418790 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418793 | orchestrator |
2025-05-25 00:59:28.418797 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.418801 | orchestrator | Sunday 25 May 2025 00:54:14 +0000 (0:00:00.385) 0:07:34.704 ************
2025-05-25 00:59:28.418804 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418808 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418812 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418816 | orchestrator |
2025-05-25 00:59:28.418819 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-25 00:59:28.418823 | orchestrator | Sunday 25 May 2025 00:54:14 +0000 (0:00:00.338) 0:07:35.043 ************
2025-05-25 00:59:28.418827 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.418834 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418840 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.418847 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418853 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.418859 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418865 | orchestrator |
2025-05-25 00:59:28.418871 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-25 00:59:28.418877 | orchestrator | Sunday 25 May 2025 00:54:15 +0000 (0:00:00.525) 0:07:35.568 ************
2025-05-25 00:59:28.418883 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.418922 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418926 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.418930 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418934 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.418938 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.418942 | orchestrator |
2025-05-25 00:59:28.418945 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-25 00:59:28.418949 | orchestrator | Sunday 25 May 2025 00:54:15 +0000 (0:00:00.617) 0:07:36.186 ************
2025-05-25 00:59:28.418953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.418957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.418961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.418979 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-25 00:59:28.418984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-25 00:59:28.418987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-25 00:59:28.418991 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.418995 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.418999 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-25 00:59:28.419002 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-25 00:59:28.419006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-25 00:59:28.419010 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419014 | orchestrator |
2025-05-25 00:59:28.419020 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-25 00:59:28.419024 | orchestrator | Sunday 25 May 2025 00:54:16 +0000 (0:00:00.646) 0:07:36.832 ************
2025-05-25 00:59:28.419028 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419032 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419035 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419039 | orchestrator |
2025-05-25 00:59:28.419043 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-25 00:59:28.419047 | orchestrator | Sunday 25 May 2025 00:54:17 +0000 (0:00:00.783) 0:07:37.615 ************
2025-05-25 00:59:28.419051 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-25 00:59:28.419054 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419058 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-25 00:59:28.419062 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419066 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-25 00:59:28.419069 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419073 | orchestrator |
2025-05-25 00:59:28.419077 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-25 00:59:28.419081 | orchestrator | Sunday 25 May 2025 00:54:17 +0000 (0:00:00.564) 0:07:38.180 ************
2025-05-25 00:59:28.419085 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419097 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419103 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419109 | orchestrator |
2025-05-25 00:59:28.419115 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-25 00:59:28.419122 | orchestrator | Sunday 25 May 2025 00:54:18 +0000 (0:00:00.771) 0:07:38.951 ************
2025-05-25 00:59:28.419128 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419135 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419141 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419147 | orchestrator |
2025-05-25 00:59:28.419153 | orchestrator | TASK [ceph-osd : set_fact add_osd] *********************************************
2025-05-25 00:59:28.419159 | orchestrator | Sunday 25 May 2025 00:54:19 +0000 (0:00:00.603) 0:07:39.555 ************
2025-05-25 00:59:28.419162 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.419166 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.419170 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.419174 | orchestrator |
2025-05-25 00:59:28.419177 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] **********************************
2025-05-25 00:59:28.419181 | orchestrator | Sunday 25 May 2025 00:54:19 +0000 (0:00:00.321) 0:07:39.876 ************
2025-05-25 00:59:28.419185 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-25 00:59:28.419189 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 00:59:28.419192 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 00:59:28.419196 | orchestrator |
2025-05-25 00:59:28.419200 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ******************************
2025-05-25 00:59:28.419203 | orchestrator | Sunday 25 May 2025 00:54:20 +0000 (0:00:01.254) 0:07:41.131 ************
2025-05-25 00:59:28.419207 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.419211 | orchestrator |
2025-05-25 00:59:28.419215 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-25 00:59:28.419218 | orchestrator | Sunday 25 May 2025 00:54:21 +0000 (0:00:00.578) 0:07:41.710 ************ 2025-05-25 00:59:28.419222 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.419226 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.419229 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.419233 | orchestrator | 2025-05-25 00:59:28.419237 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-25 00:59:28.419240 | orchestrator | Sunday 25 May 2025 00:54:21 +0000 (0:00:00.319) 0:07:42.029 ************ 2025-05-25 00:59:28.419244 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.419248 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.419252 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.419255 | orchestrator | 2025-05-25 00:59:28.419259 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-25 00:59:28.419263 | orchestrator | Sunday 25 May 2025 00:54:22 +0000 (0:00:00.613) 0:07:42.643 ************ 2025-05-25 00:59:28.419266 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.419270 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.419274 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.419277 | orchestrator | 2025-05-25 00:59:28.419281 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-25 00:59:28.419285 | orchestrator | Sunday 25 May 2025 00:54:22 +0000 (0:00:00.316) 0:07:42.959 ************ 2025-05-25 00:59:28.419289 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.419292 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.419296 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.419300 | orchestrator | 
2025-05-25 00:59:28.419303 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-25 00:59:28.419307 | orchestrator | Sunday 25 May 2025 00:54:22 +0000 (0:00:00.300) 0:07:43.260 ************ 2025-05-25 00:59:28.419315 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.419319 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.419323 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.419327 | orchestrator | 2025-05-25 00:59:28.419347 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-25 00:59:28.419351 | orchestrator | Sunday 25 May 2025 00:54:23 +0000 (0:00:00.571) 0:07:43.832 ************ 2025-05-25 00:59:28.419355 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.419359 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.419362 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.419366 | orchestrator | 2025-05-25 00:59:28.419370 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-25 00:59:28.419374 | orchestrator | Sunday 25 May 2025 00:54:23 +0000 (0:00:00.574) 0:07:44.406 ************ 2025-05-25 00:59:28.419377 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-25 00:59:28.419385 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-25 00:59:28.419389 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-25 00:59:28.419393 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-25 00:59:28.419397 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-25 00:59:28.419401 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 
2025-05-25 00:59:28.419404 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-25 00:59:28.419408 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-25 00:59:28.419412 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-25 00:59:28.419415 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-25 00:59:28.419419 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-25 00:59:28.419423 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-25 00:59:28.419427 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-25 00:59:28.419430 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-25 00:59:28.419434 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-25 00:59:28.419438 | orchestrator |
2025-05-25 00:59:28.419442 | orchestrator | TASK [ceph-osd : install dependencies] *****************************************
2025-05-25 00:59:28.419445 | orchestrator | Sunday 25 May 2025 00:54:25 +0000 (0:00:01.994) 0:07:46.401 ************
2025-05-25 00:59:28.419449 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419453 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419457 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419460 | orchestrator |
2025-05-25 00:59:28.419464 | orchestrator | TASK [ceph-osd : include_tasks common.yml] *************************************
2025-05-25 00:59:28.419468 | orchestrator | Sunday 25 May 2025 00:54:26 +0000 (0:00:00.301) 0:07:46.703 ************
2025-05-25 00:59:28.419471 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.419475 | orchestrator |
2025-05-25 00:59:28.419479 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
2025-05-25 00:59:28.419483 | orchestrator | Sunday 25 May 2025 00:54:26 +0000 (0:00:00.720) 0:07:47.423 ************
2025-05-25 00:59:28.419486 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-25 00:59:28.419490 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-25 00:59:28.419494 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-25 00:59:28.419502 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-05-25 00:59:28.419505 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-05-25 00:59:28.419509 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-05-25 00:59:28.419513 | orchestrator |
2025-05-25 00:59:28.419517 | orchestrator | TASK [ceph-osd : get keys from monitors] ***************************************
2025-05-25 00:59:28.419521 | orchestrator | Sunday 25 May 2025 00:54:27 +0000 (0:00:01.040) 0:07:48.463 ************
2025-05-25 00:59:28.419524 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 00:59:28.419528 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-25 00:59:28.419532 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-25 00:59:28.419536 | orchestrator |
2025-05-25 00:59:28.419539 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
2025-05-25 00:59:28.419543 | orchestrator | Sunday 25 May 2025 00:54:29 +0000 (0:00:01.738) 0:07:50.202 ************
2025-05-25 00:59:28.419547 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-25 00:59:28.419551 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-25 00:59:28.419554 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.419558 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-25 00:59:28.419562 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-25 00:59:28.419565 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.419569 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-25 00:59:28.419573 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-25 00:59:28.419577 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.419580 | orchestrator |
2025-05-25 00:59:28.419584 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************
2025-05-25 00:59:28.419588 | orchestrator | Sunday 25 May 2025 00:54:31 +0000 (0:00:01.440) 0:07:51.643 ************
2025-05-25 00:59:28.419603 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-25 00:59:28.419607 | orchestrator |
2025-05-25 00:59:28.419611 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] **************************
2025-05-25 00:59:28.419615 | orchestrator | Sunday 25 May 2025 00:54:33 +0000 (0:00:02.305) 0:07:53.948 ************
2025-05-25 00:59:28.419618 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.419622 | orchestrator |
2025-05-25 00:59:28.419626 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] ***
2025-05-25 00:59:28.419630 | orchestrator | Sunday 25 May 2025 00:54:33 +0000 (0:00:00.575) 0:07:54.524 ************
2025-05-25 00:59:28.419636 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419640 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419644 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419648 | orchestrator |
2025-05-25 00:59:28.419651 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] ***
2025-05-25 00:59:28.419655 | orchestrator | Sunday 25 May 2025 00:54:34 +0000 (0:00:00.528) 0:07:55.052 ************
2025-05-25 00:59:28.419659 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419663 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419666 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419670 | orchestrator |
2025-05-25 00:59:28.419674 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] ***
2025-05-25 00:59:28.419678 | orchestrator | Sunday 25 May 2025 00:54:34 +0000 (0:00:00.312) 0:07:55.364 ************
2025-05-25 00:59:28.419682 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419685 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419689 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419693 | orchestrator |
2025-05-25 00:59:28.419696 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] ***
2025-05-25 00:59:28.419703 | orchestrator | Sunday 25 May 2025 00:54:35 +0000 (0:00:00.319) 0:07:55.684 ************
2025-05-25 00:59:28.419707 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.419711 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.419715 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.419718 | orchestrator |
2025-05-25 00:59:28.419722 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ******************************
2025-05-25 00:59:28.419726 | orchestrator | Sunday 25 May 2025 00:54:35 +0000 (0:00:00.390) 0:07:56.075 ************
2025-05-25 00:59:28.419729 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.419733 | orchestrator |
2025-05-25 00:59:28.419737 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] *********************
2025-05-25 00:59:28.419741 | orchestrator | Sunday 25 May 2025 00:54:36 +0000 (0:00:00.813) 0:07:56.888 ************
2025-05-25 00:59:28.419744 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-91dc6ac0-e554-5716-a575-6858f2de7d62', 'data_vg': 'ceph-91dc6ac0-e554-5716-a575-6858f2de7d62'})
2025-05-25 00:59:28.419748 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f34e313d-bca1-5ff8-8346-de91d98588f2', 'data_vg': 'ceph-f34e313d-bca1-5ff8-8346-de91d98588f2'})
2025-05-25 00:59:28.419752 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-86509461-9ff7-5f8d-a545-2dedda0a1471', 'data_vg': 'ceph-86509461-9ff7-5f8d-a545-2dedda0a1471'})
2025-05-25 00:59:28.419756 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d', 'data_vg': 'ceph-a344b0dc-179a-5809-8fe1-9e4cbc2dd42d'})
2025-05-25 00:59:28.419760 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a31c7786-f287-566f-81cf-65786b8dbda6', 'data_vg': 'ceph-a31c7786-f287-566f-81cf-65786b8dbda6'})
2025-05-25 00:59:28.419764 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1f6e0dcd-8614-5501-94b8-6b816e10f3a3', 'data_vg': 'ceph-1f6e0dcd-8614-5501-94b8-6b816e10f3a3'})
2025-05-25 00:59:28.419767 | orchestrator |
2025-05-25 00:59:28.419771 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************
2025-05-25 00:59:28.419775 | orchestrator | Sunday 25 May 2025 00:55:16 +0000 (0:00:39.722) 0:08:36.611 ************
2025-05-25 00:59:28.419779 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419782 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419786 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.419790 | orchestrator |
2025-05-25 00:59:28.419794 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] *********************************
2025-05-25 00:59:28.419798 | orchestrator | Sunday 25 May 2025 00:55:16 +0000 (0:00:00.464) 0:08:37.076 ************
2025-05-25 00:59:28.419801 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.419805 | orchestrator |
2025-05-25 00:59:28.419809 | orchestrator | TASK [ceph-osd : get osd ids] **************************************************
2025-05-25 00:59:28.419813 | orchestrator | Sunday 25 May 2025 00:55:17 +0000 (0:00:00.559) 0:08:37.636 ************
2025-05-25 00:59:28.419816 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.419820 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.419824 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.419828 | orchestrator |
2025-05-25 00:59:28.419831 | orchestrator | TASK [ceph-osd : collect osd ids] **********************************************
2025-05-25 00:59:28.419835 | orchestrator | Sunday 25 May 2025 00:55:17 +0000 (0:00:00.634) 0:08:38.270 ************
2025-05-25 00:59:28.419839 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.419842 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.419846 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.419850 | orchestrator |
2025-05-25 00:59:28.419864 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************
2025-05-25 00:59:28.419869 | orchestrator | Sunday 25 May 2025 00:55:19 +0000 (0:00:01.878) 0:08:40.149 ************
2025-05-25 00:59:28.419878 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.419882 | orchestrator |
2025-05-25 00:59:28.419900 | orchestrator | TASK [ceph-osd : generate systemd unit file] ***********************************
2025-05-25 00:59:28.419906 | orchestrator | Sunday 25 May 2025 00:55:20 +0000 (0:00:00.532) 0:08:40.681 ************
2025-05-25 00:59:28.419912 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.419918 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.419924 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.419930 | orchestrator |
2025-05-25 00:59:28.419939 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************
2025-05-25 00:59:28.419943 | orchestrator | Sunday 25 May 2025 00:55:21 +0000 (0:00:01.146) 0:08:41.828 ************
2025-05-25 00:59:28.419947 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.419951 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.419954 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.419958 | orchestrator |
2025-05-25 00:59:28.419962 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] ***************************************
2025-05-25 00:59:28.419966 | orchestrator | Sunday 25 May 2025 00:55:22 +0000 (0:00:01.361) 0:08:43.189 ************
2025-05-25 00:59:28.419969 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.419973 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.419977 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.419980 | orchestrator |
2025-05-25 00:59:28.419984 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] *************
2025-05-25 00:59:28.419988 | orchestrator | Sunday 25 May 2025 00:55:24 +0000 (0:00:01.651) 0:08:44.841 ************
2025-05-25 00:59:28.419991 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.419995 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.419999 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420002 | orchestrator |
2025-05-25 00:59:28.420006 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] ***********************
2025-05-25 00:59:28.420010 | orchestrator | Sunday 25 May 2025 00:55:24 +0000 (0:00:00.312) 0:08:45.153 ************
2025-05-25 00:59:28.420014 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420017 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420021 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420025 | orchestrator |
2025-05-25 00:59:28.420028 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] ***
2025-05-25 00:59:28.420032 | orchestrator | Sunday 25 May 2025 00:55:25 +0000 (0:00:00.552) 0:08:45.705 ************
2025-05-25 00:59:28.420036 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.420039 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-05-25 00:59:28.420043 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-05-25 00:59:28.420047 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-05-25 00:59:28.420050 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-05-25 00:59:28.420054 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-05-25 00:59:28.420058 | orchestrator |
2025-05-25 00:59:28.420062 | orchestrator | TASK [ceph-osd : systemd start osd] ********************************************
2025-05-25 00:59:28.420065 | orchestrator | Sunday 25 May 2025 00:55:26 +0000 (0:00:01.019) 0:08:46.725 ************
2025-05-25 00:59:28.420069 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.420073 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-05-25 00:59:28.420077 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-05-25 00:59:28.420080 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-05-25 00:59:28.420084 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-05-25 00:59:28.420088 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-05-25 00:59:28.420091 | orchestrator |
2025-05-25 00:59:28.420095 | orchestrator | TASK [ceph-osd : unset noup flag] **********************************************
2025-05-25 00:59:28.420099 | orchestrator | Sunday 25 May 2025 00:55:29 +0000 (0:00:03.389) 0:08:50.114 ************
2025-05-25 00:59:28.420102 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420111 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420115 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-25 00:59:28.420119 | orchestrator |
2025-05-25 00:59:28.420123 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************
2025-05-25 00:59:28.420126 | orchestrator | Sunday 25 May 2025 00:55:32 +0000 (0:00:02.535) 0:08:52.650 ************
2025-05-25 00:59:28.420130 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420134 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420137 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left).
2025-05-25 00:59:28.420141 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-25 00:59:28.420145 | orchestrator |
2025-05-25 00:59:28.420149 | orchestrator | TASK [ceph-osd : include crush_rules.yml] **************************************
2025-05-25 00:59:28.420152 | orchestrator | Sunday 25 May 2025 00:55:44 +0000 (0:00:12.619) 0:09:05.269 ************
2025-05-25 00:59:28.420156 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420160 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420163 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420167 | orchestrator |
2025-05-25 00:59:28.420173 | orchestrator | TASK [ceph-osd : include openstack_config.yml] *********************************
2025-05-25 00:59:28.420179 | orchestrator | Sunday 25 May 2025 00:55:45 +0000 (0:00:00.427) 0:09:05.697 ************
2025-05-25 00:59:28.420185 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420191 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420197 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420203 | orchestrator |
2025-05-25 00:59:28.420209 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-25 00:59:28.420215 | orchestrator | Sunday 25 May 2025 00:55:46 +0000 (0:00:01.093) 0:09:06.790 ************
2025-05-25 00:59:28.420220 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.420226 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.420232 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.420238 | orchestrator |
2025-05-25 00:59:28.420244 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-05-25 00:59:28.420268 | orchestrator | Sunday 25 May 2025 00:55:46 +0000 (0:00:00.691) 0:09:07.482 ************
2025-05-25 00:59:28.420277 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.420281 | orchestrator |
2025-05-25 00:59:28.420285 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] **********************
2025-05-25 00:59:28.420289 | orchestrator | Sunday 25 May 2025 00:55:47 +0000 (0:00:00.785) 0:09:08.268 ************
2025-05-25 00:59:28.420293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.420296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.420303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.420307 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420310 | orchestrator |
2025-05-25 00:59:28.420314 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ********
2025-05-25 00:59:28.420318 | orchestrator | Sunday 25 May 2025 00:55:48 +0000 (0:00:00.459) 0:09:08.728 ************
2025-05-25 00:59:28.420322 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420325 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420329 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420333 | orchestrator |
2025-05-25 00:59:28.420336 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] *******************************
2025-05-25 00:59:28.420340 | orchestrator | Sunday 25 May 2025 00:55:48 +0000 (0:00:00.323) 0:09:09.051 ************
2025-05-25 00:59:28.420344 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420347 | orchestrator |
2025-05-25 00:59:28.420351 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
2025-05-25 00:59:28.420355 | orchestrator | Sunday 25 May 2025 00:55:48 +0000 (0:00:00.232) 0:09:09.284 ************
2025-05-25 00:59:28.420363 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420366 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420370 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420374 | orchestrator |
2025-05-25 00:59:28.420377 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] *********************************
2025-05-25 00:59:28.420381 | orchestrator | Sunday 25 May 2025 00:55:49 +0000 (0:00:00.626) 0:09:09.910 ************
2025-05-25 00:59:28.420385 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420388 | orchestrator |
2025-05-25 00:59:28.420392 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ********************
2025-05-25 00:59:28.420396 | orchestrator | Sunday 25 May 2025 00:55:49 +0000 (0:00:00.238) 0:09:10.149 ************
2025-05-25 00:59:28.420400 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420403 | orchestrator |
2025-05-25 00:59:28.420407 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-05-25 00:59:28.420411 | orchestrator | Sunday 25 May 2025 00:55:49 +0000 (0:00:00.306) 0:09:10.456 ************
2025-05-25 00:59:28.420414 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420418 | orchestrator |
2025-05-25 00:59:28.420422 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ******************************
2025-05-25 00:59:28.420425 | orchestrator | Sunday 25 May 2025 00:55:50 +0000 (0:00:00.134) 0:09:10.590 ************
2025-05-25 00:59:28.420429 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420433 | orchestrator |
2025-05-25 00:59:28.420437 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] *****************
2025-05-25 00:59:28.420440 | orchestrator | Sunday 25 May 2025 00:55:50 +0000 (0:00:00.232) 0:09:10.823 ************
2025-05-25 00:59:28.420444 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420448 | orchestrator |
2025-05-25 00:59:28.420452 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] *******************
2025-05-25 00:59:28.420455 | orchestrator | Sunday 25 May 2025 00:55:50 +0000 (0:00:00.252) 0:09:11.075 ************
2025-05-25 00:59:28.420459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.420463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.420466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.420470 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420474 | orchestrator |
2025-05-25 00:59:28.420477 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] *********
2025-05-25 00:59:28.420481 | orchestrator | Sunday 25 May 2025 00:55:50 +0000 (0:00:00.423) 0:09:11.499 ************
2025-05-25 00:59:28.420485 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420488 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420492 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420496 | orchestrator |
2025-05-25 00:59:28.420499 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] ***************
2025-05-25 00:59:28.420503 | orchestrator | Sunday 25 May 2025 00:55:51 +0000 (0:00:00.314) 0:09:11.813 ************
2025-05-25 00:59:28.420507 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420510 | orchestrator |
2025-05-25 00:59:28.420514 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] ****************************
2025-05-25 00:59:28.420518 | orchestrator | Sunday 25 May 2025 00:55:52 +0000 (0:00:00.849) 0:09:12.662 ************
2025-05-25 00:59:28.420521 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420525 | orchestrator |
2025-05-25 00:59:28.420529 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-25 00:59:28.420532 | orchestrator | Sunday 25 May 2025 00:55:52 +0000 (0:00:00.220) 0:09:12.883 ************
2025-05-25 00:59:28.420536 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.420540 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.420543 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.420547 | orchestrator |
2025-05-25 00:59:28.420551 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-05-25 00:59:28.420558 | orchestrator |
2025-05-25 00:59:28.420562 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-25 00:59:28.420565 | orchestrator | Sunday 25 May 2025 00:55:55 +0000 (0:00:02.947) 0:09:15.830 ************
2025-05-25 00:59:28.420582 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.420587 | orchestrator |
2025-05-25 00:59:28.420591 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-25 00:59:28.420594 | orchestrator | Sunday 25 May 2025 00:55:56 +0000 (0:00:01.266) 0:09:17.097 ************
2025-05-25 00:59:28.420598 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420602 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.420605 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420609 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.420613 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420617 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.420620 | orchestrator |
2025-05-25 00:59:28.420624 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-25 00:59:28.420630 | orchestrator | Sunday 25 May 2025 00:55:57 +0000 (0:00:00.713) 0:09:17.811 ************
2025-05-25 00:59:28.420634 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.420637 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.420641 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.420645 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.420649 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.420652 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.420656 | orchestrator |
2025-05-25 00:59:28.420660 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-25 00:59:28.420663 | orchestrator | Sunday 25 May 2025 00:55:58 +0000 (0:00:01.373) 0:09:19.184 ************
2025-05-25 00:59:28.420667 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.420671 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.420675 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.420678 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.420682 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.420686 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.420689 | orchestrator |
2025-05-25 00:59:28.420693 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-25 00:59:28.420697 | orchestrator | Sunday 25 May 2025 00:55:59 +0000 (0:00:00.998) 0:09:20.182 ************
2025-05-25 00:59:28.420701 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.420704 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.420708 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.420712 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.420715 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.420719 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.420723 | orchestrator |
2025-05-25 00:59:28.420727 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-25 00:59:28.420730 | orchestrator | Sunday 25 May 2025 00:56:01 +0000 (0:00:01.389) 0:09:21.571 ************
2025-05-25 00:59:28.420734 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420738 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.420741 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420745 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420749 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.420752 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.420756 | orchestrator |
2025-05-25 00:59:28.420760 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-25 00:59:28.420764 | orchestrator | Sunday 25 May 2025 00:56:02 +0000 (0:00:01.071) 0:09:22.643 ************
2025-05-25 00:59:28.420768 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.420771 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.420775 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.420782 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420786 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420790 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420793 | orchestrator |
2025-05-25 00:59:28.420797 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-25 00:59:28.420801 | orchestrator | Sunday 25 May 2025 00:56:02 +0000 (0:00:00.707) 0:09:23.350 ************
2025-05-25 00:59:28.420805 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.420808 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.420812 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.420816 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420819 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420823 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420827 | orchestrator |
2025-05-25 00:59:28.420830 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-25 00:59:28.420834 | orchestrator | Sunday 25 May 2025 00:56:03 +0000 (0:00:00.766) 0:09:24.116 ************
2025-05-25 00:59:28.420838 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.420841 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.420845 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.420849 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420852 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.420856 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.420860 | orchestrator |
2025-05-25 00:59:28.420863 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-25 00:59:28.420867 | orchestrator | Sunday 25 May 2025 00:56:04 +0000 (0:00:00.608) 0:09:24.724 ************
2025-05-25 00:59:28.420871 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.420875 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.420878 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.420882 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.420898 | orchestrator | skipping:
[testbed-node-4] 2025-05-25 00:59:28.420902 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.420906 | orchestrator | 2025-05-25 00:59:28.420909 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-25 00:59:28.420913 | orchestrator | Sunday 25 May 2025 00:56:04 +0000 (0:00:00.673) 0:09:25.398 ************ 2025-05-25 00:59:28.420917 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.420920 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.420924 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.420928 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.420931 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.420935 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.420939 | orchestrator | 2025-05-25 00:59:28.420942 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-25 00:59:28.420946 | orchestrator | Sunday 25 May 2025 00:56:05 +0000 (0:00:00.557) 0:09:25.955 ************ 2025-05-25 00:59:28.420950 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.420954 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.420969 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.420974 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.420977 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.420981 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.420985 | orchestrator | 2025-05-25 00:59:28.420989 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-25 00:59:28.420992 | orchestrator | Sunday 25 May 2025 00:56:06 +0000 (0:00:01.060) 0:09:27.015 ************ 2025-05-25 00:59:28.420996 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421000 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421004 | orchestrator | skipping: [testbed-node-2] 2025-05-25 
00:59:28.421007 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421011 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421015 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421021 | orchestrator | 2025-05-25 00:59:28.421028 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-25 00:59:28.421032 | orchestrator | Sunday 25 May 2025 00:56:07 +0000 (0:00:00.542) 0:09:27.558 ************ 2025-05-25 00:59:28.421035 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.421039 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.421043 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.421047 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421050 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421054 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421058 | orchestrator | 2025-05-25 00:59:28.421061 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-25 00:59:28.421065 | orchestrator | Sunday 25 May 2025 00:56:07 +0000 (0:00:00.814) 0:09:28.373 ************ 2025-05-25 00:59:28.421069 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421072 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421076 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421080 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.421084 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.421087 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.421091 | orchestrator | 2025-05-25 00:59:28.421095 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-25 00:59:28.421098 | orchestrator | Sunday 25 May 2025 00:56:08 +0000 (0:00:00.649) 0:09:29.022 ************ 2025-05-25 00:59:28.421102 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421106 | orchestrator | 
skipping: [testbed-node-1] 2025-05-25 00:59:28.421110 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421113 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.421117 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.421121 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.421124 | orchestrator | 2025-05-25 00:59:28.421128 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-25 00:59:28.421132 | orchestrator | Sunday 25 May 2025 00:56:09 +0000 (0:00:00.866) 0:09:29.889 ************ 2025-05-25 00:59:28.421136 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421139 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421143 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421147 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.421150 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.421154 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.421158 | orchestrator | 2025-05-25 00:59:28.421161 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-25 00:59:28.421165 | orchestrator | Sunday 25 May 2025 00:56:10 +0000 (0:00:00.742) 0:09:30.631 ************ 2025-05-25 00:59:28.421169 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421172 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421176 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421180 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421183 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421187 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421191 | orchestrator | 2025-05-25 00:59:28.421194 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-25 00:59:28.421198 | orchestrator | Sunday 25 May 2025 00:56:11 +0000 (0:00:00.923) 0:09:31.555 ************ 2025-05-25 
00:59:28.421202 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421206 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421209 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421213 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421217 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421220 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421224 | orchestrator | 2025-05-25 00:59:28.421228 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-25 00:59:28.421231 | orchestrator | Sunday 25 May 2025 00:56:11 +0000 (0:00:00.705) 0:09:32.260 ************ 2025-05-25 00:59:28.421238 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.421242 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.421246 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.421250 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421253 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421257 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421261 | orchestrator | 2025-05-25 00:59:28.421264 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-25 00:59:28.421268 | orchestrator | Sunday 25 May 2025 00:56:12 +0000 (0:00:00.756) 0:09:33.017 ************ 2025-05-25 00:59:28.421272 | orchestrator | ok: [testbed-node-0] 2025-05-25 00:59:28.421276 | orchestrator | ok: [testbed-node-1] 2025-05-25 00:59:28.421279 | orchestrator | ok: [testbed-node-2] 2025-05-25 00:59:28.421283 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.421287 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.421290 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.421294 | orchestrator | 2025-05-25 00:59:28.421298 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-25 00:59:28.421301 | orchestrator | 
Sunday 25 May 2025 00:56:12 +0000 (0:00:00.476) 0:09:33.494 ************ 2025-05-25 00:59:28.421305 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421309 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421313 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421316 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421320 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421324 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421327 | orchestrator | 2025-05-25 00:59:28.421331 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-25 00:59:28.421335 | orchestrator | Sunday 25 May 2025 00:56:13 +0000 (0:00:00.603) 0:09:34.098 ************ 2025-05-25 00:59:28.421349 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421353 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421357 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421361 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421364 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421368 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421372 | orchestrator | 2025-05-25 00:59:28.421375 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-25 00:59:28.421379 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.512) 0:09:34.611 ************ 2025-05-25 00:59:28.421383 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421387 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421390 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421394 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421400 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421404 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421407 | orchestrator | 2025-05-25 00:59:28.421411 | 
orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-25 00:59:28.421415 | orchestrator | Sunday 25 May 2025 00:56:14 +0000 (0:00:00.689) 0:09:35.300 ************ 2025-05-25 00:59:28.421419 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421422 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421426 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421430 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421433 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421437 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421441 | orchestrator | 2025-05-25 00:59:28.421444 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-25 00:59:28.421448 | orchestrator | Sunday 25 May 2025 00:56:15 +0000 (0:00:00.576) 0:09:35.876 ************ 2025-05-25 00:59:28.421452 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421456 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421459 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421463 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421470 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421473 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421477 | orchestrator | 2025-05-25 00:59:28.421481 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-25 00:59:28.421485 | orchestrator | Sunday 25 May 2025 00:56:16 +0000 (0:00:00.716) 0:09:36.592 ************ 2025-05-25 00:59:28.421488 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421492 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421496 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421499 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421503 | orchestrator | skipping: [testbed-node-4] 
2025-05-25 00:59:28.421507 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421510 | orchestrator | 2025-05-25 00:59:28.421514 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-25 00:59:28.421518 | orchestrator | Sunday 25 May 2025 00:56:16 +0000 (0:00:00.563) 0:09:37.156 ************ 2025-05-25 00:59:28.421521 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421525 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421529 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421532 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421536 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421540 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421544 | orchestrator | 2025-05-25 00:59:28.421547 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-25 00:59:28.421551 | orchestrator | Sunday 25 May 2025 00:56:17 +0000 (0:00:00.684) 0:09:37.840 ************ 2025-05-25 00:59:28.421555 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421558 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421562 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421566 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421569 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421573 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421577 | orchestrator | 2025-05-25 00:59:28.421581 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-25 00:59:28.421585 | orchestrator | Sunday 25 May 2025 00:56:17 +0000 (0:00:00.610) 0:09:38.450 ************ 2025-05-25 00:59:28.421588 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421592 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421596 
| orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421599 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421603 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421607 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421610 | orchestrator | 2025-05-25 00:59:28.421614 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-25 00:59:28.421618 | orchestrator | Sunday 25 May 2025 00:56:18 +0000 (0:00:00.926) 0:09:39.377 ************ 2025-05-25 00:59:28.421622 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421625 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421629 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421633 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421636 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421640 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421644 | orchestrator | 2025-05-25 00:59:28.421647 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-25 00:59:28.421651 | orchestrator | Sunday 25 May 2025 00:56:19 +0000 (0:00:00.766) 0:09:40.144 ************ 2025-05-25 00:59:28.421655 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421658 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421662 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421666 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421669 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421676 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421679 | orchestrator | 2025-05-25 00:59:28.421683 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-25 00:59:28.421687 | orchestrator | Sunday 25 May 2025 00:56:20 +0000 (0:00:00.870) 0:09:41.014 
************ 2025-05-25 00:59:28.421690 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421694 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421698 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421712 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421716 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421720 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421724 | orchestrator | 2025-05-25 00:59:28.421727 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-25 00:59:28.421731 | orchestrator | Sunday 25 May 2025 00:56:21 +0000 (0:00:00.649) 0:09:41.664 ************ 2025-05-25 00:59:28.421735 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-25 00:59:28.421739 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-25 00:59:28.421742 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421746 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-25 00:59:28.421750 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-25 00:59:28.421756 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421759 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-25 00:59:28.421763 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-25 00:59:28.421767 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421771 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-25 00:59:28.421775 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-25 00:59:28.421778 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421782 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-25 00:59:28.421786 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-25 00:59:28.421789 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421793 | orchestrator | skipping: [testbed-node-5] => (item=)  
2025-05-25 00:59:28.421797 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-25 00:59:28.421800 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421804 | orchestrator | 2025-05-25 00:59:28.421808 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-25 00:59:28.421811 | orchestrator | Sunday 25 May 2025 00:56:22 +0000 (0:00:00.929) 0:09:42.594 ************ 2025-05-25 00:59:28.421815 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-25 00:59:28.421819 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-25 00:59:28.421822 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421826 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-25 00:59:28.421830 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-25 00:59:28.421834 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421837 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-25 00:59:28.421841 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-25 00:59:28.421845 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421848 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-25 00:59:28.421852 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-25 00:59:28.421856 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421860 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-25 00:59:28.421863 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-25 00:59:28.421867 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421871 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-25 00:59:28.421874 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  
2025-05-25 00:59:28.421878 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421899 | orchestrator | 2025-05-25 00:59:28.421903 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-25 00:59:28.421907 | orchestrator | Sunday 25 May 2025 00:56:22 +0000 (0:00:00.725) 0:09:43.319 ************ 2025-05-25 00:59:28.421911 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421914 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421918 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421922 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421926 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421929 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421933 | orchestrator | 2025-05-25 00:59:28.421937 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-25 00:59:28.421940 | orchestrator | Sunday 25 May 2025 00:56:23 +0000 (0:00:00.919) 0:09:44.238 ************ 2025-05-25 00:59:28.421944 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421948 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.421951 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421955 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421959 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421962 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421966 | orchestrator | 2025-05-25 00:59:28.421970 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-25 00:59:28.421974 | orchestrator | Sunday 25 May 2025 00:56:24 +0000 (0:00:00.684) 0:09:44.923 ************ 2025-05-25 00:59:28.421977 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.421981 | orchestrator | skipping: [testbed-node-1] 2025-05-25 
00:59:28.421985 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.421988 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.421992 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.421996 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.421999 | orchestrator | 2025-05-25 00:59:28.422003 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-25 00:59:28.422007 | orchestrator | Sunday 25 May 2025 00:56:25 +0000 (0:00:00.922) 0:09:45.845 ************ 2025-05-25 00:59:28.422010 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.422028 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.422032 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.422035 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.422039 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.422042 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.422046 | orchestrator | 2025-05-25 00:59:28.422050 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-25 00:59:28.422054 | orchestrator | Sunday 25 May 2025 00:56:26 +0000 (0:00:00.746) 0:09:46.592 ************ 2025-05-25 00:59:28.422070 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.422074 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.422078 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.422082 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.422085 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.422089 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.422093 | orchestrator | 2025-05-25 00:59:28.422096 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-25 00:59:28.422100 | orchestrator | Sunday 25 May 2025 00:56:26 +0000 (0:00:00.907) 0:09:47.500 ************ 2025-05-25 
00:59:28.422104 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.422108 | orchestrator | skipping: [testbed-node-1] 2025-05-25 00:59:28.422111 | orchestrator | skipping: [testbed-node-2] 2025-05-25 00:59:28.422118 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.422122 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.422125 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.422129 | orchestrator | 2025-05-25 00:59:28.422133 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-25 00:59:28.422140 | orchestrator | Sunday 25 May 2025 00:56:27 +0000 (0:00:00.697) 0:09:48.198 ************ 2025-05-25 00:59:28.422144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.422148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.422151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.422155 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.422159 | orchestrator | 2025-05-25 00:59:28.422163 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-25 00:59:28.422166 | orchestrator | Sunday 25 May 2025 00:56:28 +0000 (0:00:00.523) 0:09:48.721 ************ 2025-05-25 00:59:28.422170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-25 00:59:28.422174 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-25 00:59:28.422177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-25 00:59:28.422181 | orchestrator | skipping: [testbed-node-0] 2025-05-25 00:59:28.422185 | orchestrator | 2025-05-25 00:59:28.422189 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-25 00:59:28.422192 | orchestrator | Sunday 25 May 2025 00:56:28 +0000 (0:00:00.441) 0:09:49.162 ************ 
2025-05-25 00:59:28.422196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.422200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.422203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.422207 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422211 | orchestrator |
2025-05-25 00:59:28.422216 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.422222 | orchestrator | Sunday 25 May 2025 00:56:29 +0000 (0:00:00.441) 0:09:49.603 ************
2025-05-25 00:59:28.422228 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422234 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422240 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422246 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422251 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422257 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422263 | orchestrator |
2025-05-25 00:59:28.422268 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-25 00:59:28.422274 | orchestrator | Sunday 25 May 2025 00:56:29 +0000 (0:00:00.901) 0:09:50.505 ************
2025-05-25 00:59:28.422280 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 00:59:28.422285 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422291 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 00:59:28.422297 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422303 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.422309 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422315 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.422320 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422326 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.422332 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422337 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.422343 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422348 | orchestrator |
2025-05-25 00:59:28.422354 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-25 00:59:28.422361 | orchestrator | Sunday 25 May 2025 00:56:30 +0000 (0:00:00.854) 0:09:51.359 ************
2025-05-25 00:59:28.422366 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422369 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422373 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422377 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422381 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422384 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422392 | orchestrator |
2025-05-25 00:59:28.422395 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 00:59:28.422399 | orchestrator | Sunday 25 May 2025 00:56:31 +0000 (0:00:00.971) 0:09:52.331 ************
2025-05-25 00:59:28.422403 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422407 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422410 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422414 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422418 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422421 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422425 | orchestrator |
2025-05-25 00:59:28.422429 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-25 00:59:28.422433 | orchestrator | Sunday 25 May 2025 00:56:32 +0000 (0:00:00.807) 0:09:53.138 ************
2025-05-25 00:59:28.422436 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-25 00:59:28.422440 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-25 00:59:28.422444 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422448 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422451 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-25 00:59:28.422455 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422474 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 00:59:28.422478 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422482 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 00:59:28.422485 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422491 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 00:59:28.422497 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422503 | orchestrator |
2025-05-25 00:59:28.422509 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-25 00:59:28.422515 | orchestrator | Sunday 25 May 2025 00:56:33 +0000 (0:00:01.238) 0:09:54.377 ************
2025-05-25 00:59:28.422524 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422531 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422541 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422547 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.422553 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422559 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.422565 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422571 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 00:59:28.422576 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422583 | orchestrator |
2025-05-25 00:59:28.422588 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-25 00:59:28.422594 | orchestrator | Sunday 25 May 2025 00:56:34 +0000 (0:00:00.855) 0:09:55.233 ************
2025-05-25 00:59:28.422600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-25 00:59:28.422606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-25 00:59:28.422611 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-25 00:59:28.422617 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-25 00:59:28.422623 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-25 00:59:28.422629 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-25 00:59:28.422635 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422641 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-25 00:59:28.422647 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-25 00:59:28.422652 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-25 00:59:28.422665 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.422673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.422677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.422680 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422684 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-25 00:59:28.422688 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-25 00:59:28.422691 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-25 00:59:28.422695 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422699 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422702 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-25 00:59:28.422706 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-25 00:59:28.422710 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-25 00:59:28.422713 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422717 | orchestrator |
2025-05-25 00:59:28.422721 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-25 00:59:28.422724 | orchestrator | Sunday 25 May 2025 00:56:36 +0000 (0:00:01.499) 0:09:56.733 ************
2025-05-25 00:59:28.422728 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422732 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422736 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422739 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422743 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422747 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422750 | orchestrator |
2025-05-25 00:59:28.422754 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-25 00:59:28.422758 | orchestrator | Sunday 25 May 2025 00:56:37 +0000 (0:00:01.146) 0:09:57.879 ************
2025-05-25 00:59:28.422761 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422765 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422769 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422772 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-25 00:59:28.422776 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422780 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-25 00:59:28.422783 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422787 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-25 00:59:28.422791 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422794 | orchestrator |
2025-05-25 00:59:28.422798 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-25 00:59:28.422802 | orchestrator | Sunday 25 May 2025 00:56:38 +0000 (0:00:01.102) 0:09:58.981 ************
2025-05-25 00:59:28.422806 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422809 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422813 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422816 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422820 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422824 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422827 | orchestrator |
2025-05-25 00:59:28.422831 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-25 00:59:28.422838 | orchestrator | Sunday 25 May 2025 00:56:39 +0000 (0:00:01.199) 0:10:00.180 ************
2025-05-25 00:59:28.422842 | orchestrator | skipping: [testbed-node-0]
2025-05-25 00:59:28.422846 | orchestrator | skipping: [testbed-node-1]
2025-05-25 00:59:28.422850 | orchestrator | skipping: [testbed-node-2]
2025-05-25 00:59:28.422853 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.422857 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.422861 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.422864 | orchestrator |
2025-05-25 00:59:28.422871 | orchestrator | TASK [ceph-crash : create client.crash keyring] ********************************
2025-05-25 00:59:28.422875 | orchestrator | Sunday 25 May 2025 00:56:41 +0000 (0:00:01.632) 0:10:01.813 ************
2025-05-25 00:59:28.422878 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.422882 | orchestrator |
2025-05-25 00:59:28.422914 | orchestrator | TASK [ceph-crash : get keys from monitors] *************************************
2025-05-25 00:59:28.422919 | orchestrator | Sunday 25 May 2025 00:56:44 +0000 (0:00:03.257) 0:10:05.070 ************
2025-05-25 00:59:28.422922 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.422926 | orchestrator |
2025-05-25 00:59:28.422930 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] *********************************
2025-05-25 00:59:28.422934 | orchestrator | Sunday 25 May 2025 00:56:46 +0000 (0:00:01.650) 0:10:06.721 ************
2025-05-25 00:59:28.422938 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.422941 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.422945 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.422949 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.422953 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.422956 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.422960 | orchestrator |
2025-05-25 00:59:28.422964 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
2025-05-25 00:59:28.422967 | orchestrator | Sunday 25 May 2025 00:56:47 +0000 (0:00:01.363) 0:10:08.085 ************
2025-05-25 00:59:28.422971 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.422975 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.422978 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.422982 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.422986 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.422989 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.422993 | orchestrator |
2025-05-25 00:59:28.422997 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] **********************************
2025-05-25 00:59:28.423001 | orchestrator | Sunday 25 May 2025 00:56:48 +0000 (0:00:01.025) 0:10:09.110 ************
2025-05-25 00:59:28.423004 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.423009 | orchestrator |
2025-05-25 00:59:28.423013 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-05-25 00:59:28.423017 | orchestrator | Sunday 25 May 2025 00:56:49 +0000 (0:00:01.250) 0:10:10.361 ************
2025-05-25 00:59:28.423020 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.423024 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.423028 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.423032 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.423035 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.423039 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.423043 | orchestrator |
2025-05-25 00:59:28.423046 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-05-25 00:59:28.423050 | orchestrator | Sunday 25 May 2025 00:56:51 +0000 (0:00:01.499) 0:10:11.860 ************
2025-05-25 00:59:28.423096 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.423109 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.423112 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.423116 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.423120 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.423124 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.423127 | orchestrator |
2025-05-25 00:59:28.423131 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-05-25 00:59:28.423135 | orchestrator | Sunday 25 May 2025 00:56:55 +0000 (0:00:04.218) 0:10:16.079 ************
2025-05-25 00:59:28.423139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.423143 | orchestrator |
2025-05-25 00:59:28.423150 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
2025-05-25 00:59:28.423154 | orchestrator | Sunday 25 May 2025 00:56:56 +0000 (0:00:01.292) 0:10:17.371 ************
2025-05-25 00:59:28.423157 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.423161 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.423165 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.423169 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423172 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423176 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423180 | orchestrator |
2025-05-25 00:59:28.423183 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
2025-05-25 00:59:28.423187 | orchestrator | Sunday 25 May 2025 00:56:57 +0000 (0:00:00.666) 0:10:18.037 ************
2025-05-25 00:59:28.423191 | orchestrator | changed: [testbed-node-0]
2025-05-25 00:59:28.423195 | orchestrator | changed: [testbed-node-1]
2025-05-25 00:59:28.423198 | orchestrator | changed: [testbed-node-2]
2025-05-25 00:59:28.423202 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.423206 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.423209 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.423213 | orchestrator |
2025-05-25 00:59:28.423217 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
2025-05-25 00:59:28.423220 | orchestrator | Sunday 25 May 2025 00:56:59 +0000 (0:00:02.489) 0:10:20.527 ************
2025-05-25 00:59:28.423224 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:28.423228 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:28.423231 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:28.423235 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423239 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423242 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423246 | orchestrator |
2025-05-25 00:59:28.423250 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-05-25 00:59:28.423253 | orchestrator |
2025-05-25 00:59:28.423257 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-25 00:59:28.423266 | orchestrator | Sunday 25 May 2025 00:57:02 +0000 (0:00:02.490) 0:10:23.017 ************
2025-05-25 00:59:28.423270 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 00:59:28.423273 | orchestrator |
2025-05-25 00:59:28.423277 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-25 00:59:28.423281 | orchestrator | Sunday 25 May 2025 00:57:02 +0000 (0:00:00.530) 0:10:23.548 ************
2025-05-25 00:59:28.423284 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423288 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423292 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423296 | orchestrator |
2025-05-25 00:59:28.423302 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-25 00:59:28.423305 | orchestrator | Sunday 25 May 2025 00:57:03 +0000 (0:00:00.542) 0:10:24.090 ************
2025-05-25 00:59:28.423309 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423313 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423317 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423321 | orchestrator |
2025-05-25 00:59:28.423325 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-25 00:59:28.423328 | orchestrator | Sunday 25 May 2025 00:57:04 +0000 (0:00:00.721) 0:10:24.812 ************
2025-05-25 00:59:28.423332 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423336 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423339 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423343 | orchestrator |
2025-05-25 00:59:28.423347 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-25 00:59:28.423351 | orchestrator | Sunday 25 May 2025 00:57:04 +0000 (0:00:00.689) 0:10:25.501 ************
2025-05-25 00:59:28.423354 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423358 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423364 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423368 | orchestrator |
2025-05-25 00:59:28.423372 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-25 00:59:28.423376 | orchestrator | Sunday 25 May 2025 00:57:05 +0000 (0:00:00.931) 0:10:26.433 ************
2025-05-25 00:59:28.423380 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423383 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423387 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423391 | orchestrator |
2025-05-25 00:59:28.423394 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-25 00:59:28.423398 | orchestrator | Sunday 25 May 2025 00:57:06 +0000 (0:00:00.328) 0:10:26.762 ************
2025-05-25 00:59:28.423402 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423406 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423409 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423413 | orchestrator |
2025-05-25 00:59:28.423417 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-25 00:59:28.423421 | orchestrator | Sunday 25 May 2025 00:57:06 +0000 (0:00:00.317) 0:10:27.080 ************
2025-05-25 00:59:28.423424 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423428 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423432 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423435 | orchestrator |
2025-05-25 00:59:28.423439 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-25 00:59:28.423443 | orchestrator | Sunday 25 May 2025 00:57:06 +0000 (0:00:00.317) 0:10:27.397 ************
2025-05-25 00:59:28.423447 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423450 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423454 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423458 | orchestrator |
2025-05-25 00:59:28.423462 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-25 00:59:28.423465 | orchestrator | Sunday 25 May 2025 00:57:07 +0000 (0:00:00.546) 0:10:27.944 ************
2025-05-25 00:59:28.423469 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423473 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423476 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423480 | orchestrator |
2025-05-25 00:59:28.423484 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-25 00:59:28.423488 | orchestrator | Sunday 25 May 2025 00:57:07 +0000 (0:00:00.310) 0:10:28.254 ************
2025-05-25 00:59:28.423491 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423495 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423499 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423502 | orchestrator |
2025-05-25 00:59:28.423506 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-25 00:59:28.423510 | orchestrator | Sunday 25 May 2025 00:57:08 +0000 (0:00:00.325) 0:10:28.579 ************
2025-05-25 00:59:28.423514 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423517 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423521 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423525 | orchestrator |
2025-05-25 00:59:28.423528 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-25 00:59:28.423532 | orchestrator | Sunday 25 May 2025 00:57:08 +0000 (0:00:00.701) 0:10:29.281 ************
2025-05-25 00:59:28.423536 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423539 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423543 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423547 | orchestrator |
2025-05-25 00:59:28.423550 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-25 00:59:28.423554 | orchestrator | Sunday 25 May 2025 00:57:09 +0000 (0:00:00.542) 0:10:29.823 ************
2025-05-25 00:59:28.423558 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423562 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423565 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423573 | orchestrator |
2025-05-25 00:59:28.423577 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-25 00:59:28.423580 | orchestrator | Sunday 25 May 2025 00:57:09 +0000 (0:00:00.331) 0:10:30.155 ************
2025-05-25 00:59:28.423584 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423588 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423592 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423595 | orchestrator |
2025-05-25 00:59:28.423599 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-25 00:59:28.423605 | orchestrator | Sunday 25 May 2025 00:57:09 +0000 (0:00:00.356) 0:10:30.511 ************
2025-05-25 00:59:28.423609 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423613 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423617 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423620 | orchestrator |
2025-05-25 00:59:28.423624 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-25 00:59:28.423628 | orchestrator | Sunday 25 May 2025 00:57:10 +0000 (0:00:00.349) 0:10:30.860 ************
2025-05-25 00:59:28.423632 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423635 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423639 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423643 | orchestrator |
2025-05-25 00:59:28.423647 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-25 00:59:28.423652 | orchestrator | Sunday 25 May 2025 00:57:10 +0000 (0:00:00.588) 0:10:31.449 ************
2025-05-25 00:59:28.423656 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423660 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423664 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423668 | orchestrator |
2025-05-25 00:59:28.423671 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-25 00:59:28.423675 | orchestrator | Sunday 25 May 2025 00:57:11 +0000 (0:00:00.335) 0:10:31.785 ************
2025-05-25 00:59:28.423679 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423683 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423686 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423690 | orchestrator |
2025-05-25 00:59:28.423695 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-25 00:59:28.423699 | orchestrator | Sunday 25 May 2025 00:57:11 +0000 (0:00:00.316) 0:10:32.101 ************
2025-05-25 00:59:28.423702 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423706 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423710 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423713 | orchestrator |
2025-05-25 00:59:28.423717 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-25 00:59:28.423721 | orchestrator | Sunday 25 May 2025 00:57:11 +0000 (0:00:00.305) 0:10:32.406 ************
2025-05-25 00:59:28.423724 | orchestrator | ok: [testbed-node-3]
2025-05-25 00:59:28.423728 | orchestrator | ok: [testbed-node-4]
2025-05-25 00:59:28.423732 | orchestrator | ok: [testbed-node-5]
2025-05-25 00:59:28.423736 | orchestrator |
2025-05-25 00:59:28.423739 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-25 00:59:28.423743 | orchestrator | Sunday 25 May 2025 00:57:12 +0000 (0:00:00.595) 0:10:33.001 ************
2025-05-25 00:59:28.423747 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423751 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423754 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423758 | orchestrator |
2025-05-25 00:59:28.423762 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-25 00:59:28.423766 | orchestrator | Sunday 25 May 2025 00:57:12 +0000 (0:00:00.320) 0:10:33.322 ************
2025-05-25 00:59:28.423769 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423773 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423777 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423780 | orchestrator |
2025-05-25 00:59:28.423784 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-25 00:59:28.423791 | orchestrator | Sunday 25 May 2025 00:57:13 +0000 (0:00:00.391) 0:10:33.714 ************
2025-05-25 00:59:28.423795 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423799 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423803 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423806 | orchestrator |
2025-05-25 00:59:28.423810 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-25 00:59:28.423814 | orchestrator | Sunday 25 May 2025 00:57:13 +0000 (0:00:00.325) 0:10:34.039 ************
2025-05-25 00:59:28.423818 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423821 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423825 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423829 | orchestrator |
2025-05-25 00:59:28.423832 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-25 00:59:28.423836 | orchestrator | Sunday 25 May 2025 00:57:14 +0000 (0:00:00.658) 0:10:34.698 ************
2025-05-25 00:59:28.423840 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423844 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423847 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423851 | orchestrator |
2025-05-25 00:59:28.423855 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-25 00:59:28.423858 | orchestrator | Sunday 25 May 2025 00:57:14 +0000 (0:00:00.315) 0:10:35.013 ************
2025-05-25 00:59:28.423862 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423866 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423869 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423873 | orchestrator |
2025-05-25 00:59:28.423877 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-25 00:59:28.423881 | orchestrator | Sunday 25 May 2025 00:57:14 +0000 (0:00:00.339) 0:10:35.353 ************
2025-05-25 00:59:28.423894 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423898 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423902 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423906 | orchestrator |
2025-05-25 00:59:28.423910 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-25 00:59:28.423913 | orchestrator | Sunday 25 May 2025 00:57:15 +0000 (0:00:00.381) 0:10:35.734 ************
2025-05-25 00:59:28.423917 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423921 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423925 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423928 | orchestrator |
2025-05-25 00:59:28.423932 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-25 00:59:28.423936 | orchestrator | Sunday 25 May 2025 00:57:15 +0000 (0:00:00.388) 0:10:36.123 ************
2025-05-25 00:59:28.423940 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423943 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423947 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423951 | orchestrator |
2025-05-25 00:59:28.423957 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-25 00:59:28.423960 | orchestrator | Sunday 25 May 2025 00:57:16 +0000 (0:00:00.697) 0:10:36.820 ************
2025-05-25 00:59:28.423964 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423968 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423972 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.423975 | orchestrator |
2025-05-25 00:59:28.423979 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-25 00:59:28.423983 | orchestrator | Sunday 25 May 2025 00:57:16 +0000 (0:00:00.362) 0:10:37.183 ************
2025-05-25 00:59:28.423987 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.423993 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.423997 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424000 | orchestrator |
2025-05-25 00:59:28.424007 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-25 00:59:28.424011 | orchestrator | Sunday 25 May 2025 00:57:16 +0000 (0:00:00.318) 0:10:37.502 ************
2025-05-25 00:59:28.424015 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424019 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424022 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424026 | orchestrator |
2025-05-25 00:59:28.424030 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-25 00:59:28.424034 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.302) 0:10:37.804 ************
2025-05-25 00:59:28.424038 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.424042 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-25 00:59:28.424046 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424049 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.424053 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-25 00:59:28.424057 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424060 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.424064 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-25 00:59:28.424068 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424071 | orchestrator |
2025-05-25 00:59:28.424075 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-25 00:59:28.424079 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.516) 0:10:38.321 ************
2025-05-25 00:59:28.424083 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-25 00:59:28.424086 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-25 00:59:28.424090 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424094 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-25 00:59:28.424098 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-25 00:59:28.424101 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424105 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-25 00:59:28.424109 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-25 00:59:28.424113 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424116 | orchestrator |
2025-05-25 00:59:28.424120 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-25 00:59:28.424124 | orchestrator | Sunday 25 May 2025 00:57:18 +0000 (0:00:00.322) 0:10:38.644 ************
2025-05-25 00:59:28.424127 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424131 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424135 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424139 | orchestrator |
2025-05-25 00:59:28.424142 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-25 00:59:28.424146 | orchestrator | Sunday 25 May 2025 00:57:18 +0000 (0:00:00.308) 0:10:38.952 ************
2025-05-25 00:59:28.424150 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424154 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424158 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424161 | orchestrator |
2025-05-25 00:59:28.424165 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-25 00:59:28.424169 | orchestrator | Sunday 25 May 2025 00:57:18 +0000 (0:00:00.294) 0:10:39.246 ************
2025-05-25 00:59:28.424173 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424176 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424180 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424184 | orchestrator |
2025-05-25 00:59:28.424187 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-25 00:59:28.424191 | orchestrator | Sunday 25 May 2025 00:57:19 +0000 (0:00:00.479) 0:10:39.726 ************
2025-05-25 00:59:28.424195 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424201 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424205 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424209 | orchestrator |
2025-05-25 00:59:28.424212 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-25 00:59:28.424216 | orchestrator | Sunday 25 May 2025 00:57:19 +0000 (0:00:00.293) 0:10:40.019 ************
2025-05-25 00:59:28.424220 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424224 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424227 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424231 | orchestrator |
2025-05-25 00:59:28.424235 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-25 00:59:28.424239 | orchestrator | Sunday 25 May 2025 00:57:19 +0000 (0:00:00.337) 0:10:40.357 ************
2025-05-25 00:59:28.424242 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424246 | orchestrator | skipping: [testbed-node-4]
2025-05-25 00:59:28.424250 | orchestrator | skipping: [testbed-node-5]
2025-05-25 00:59:28.424253 | orchestrator |
2025-05-25 00:59:28.424257 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-25 00:59:28.424261 | orchestrator | Sunday 25 May 2025 00:57:20 +0000 (0:00:00.285) 0:10:40.642 ************
2025-05-25 00:59:28.424264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.424270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.424274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.424277 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424281 | orchestrator |
2025-05-25 00:59:28.424285 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-25 00:59:28.424289 | orchestrator | Sunday 25 May 2025 00:57:20 +0000 (0:00:00.638) 0:10:41.280 ************
2025-05-25 00:59:28.424292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.424296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.424300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 00:59:28.424303 | orchestrator | skipping: [testbed-node-3]
2025-05-25 00:59:28.424307 | orchestrator |
2025-05-25 00:59:28.424313 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-25 00:59:28.424317 | orchestrator | Sunday 25 May 2025 00:57:21 +0000 (0:00:00.368) 0:10:41.649 ************
2025-05-25 00:59:28.424320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 00:59:28.424324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 00:59:28.424328 | orchestrator | skipping: [testbed-node-3] =>
(item=testbed-node-5)  2025-05-25 00:59:28.424331 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424335 | orchestrator | 2025-05-25 00:59:28.424339 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-25 00:59:28.424343 | orchestrator | Sunday 25 May 2025 00:57:21 +0000 (0:00:00.355) 0:10:42.005 ************ 2025-05-25 00:59:28.424346 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424350 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424354 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424357 | orchestrator | 2025-05-25 00:59:28.424361 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-25 00:59:28.424365 | orchestrator | Sunday 25 May 2025 00:57:21 +0000 (0:00:00.265) 0:10:42.270 ************ 2025-05-25 00:59:28.424368 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-25 00:59:28.424372 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424376 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-25 00:59:28.424379 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424383 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-25 00:59:28.424387 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424391 | orchestrator | 2025-05-25 00:59:28.424395 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-25 00:59:28.424401 | orchestrator | Sunday 25 May 2025 00:57:22 +0000 (0:00:00.367) 0:10:42.638 ************ 2025-05-25 00:59:28.424404 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424408 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424412 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424415 | orchestrator | 2025-05-25 00:59:28.424419 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
*************************** 2025-05-25 00:59:28.424423 | orchestrator | Sunday 25 May 2025 00:57:22 +0000 (0:00:00.446) 0:10:43.084 ************ 2025-05-25 00:59:28.424427 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424430 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424434 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424438 | orchestrator | 2025-05-25 00:59:28.424441 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-25 00:59:28.424445 | orchestrator | Sunday 25 May 2025 00:57:22 +0000 (0:00:00.287) 0:10:43.372 ************ 2025-05-25 00:59:28.424449 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-25 00:59:28.424452 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424456 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-25 00:59:28.424460 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424463 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-25 00:59:28.424467 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424471 | orchestrator | 2025-05-25 00:59:28.424475 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-25 00:59:28.424478 | orchestrator | Sunday 25 May 2025 00:57:23 +0000 (0:00:00.409) 0:10:43.781 ************ 2025-05-25 00:59:28.424482 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-25 00:59:28.424486 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424490 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-25 00:59:28.424493 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424497 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-25 00:59:28.424501 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424505 | orchestrator | 2025-05-25 00:59:28.424508 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-25 00:59:28.424512 | orchestrator | Sunday 25 May 2025 00:57:23 +0000 (0:00:00.299) 0:10:44.081 ************ 2025-05-25 00:59:28.424516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.424520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.424523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.424527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-25 00:59:28.424531 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-25 00:59:28.424534 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-25 00:59:28.424538 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424542 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424546 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-25 00:59:28.424549 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-25 00:59:28.424555 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-25 00:59:28.424558 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424562 | orchestrator | 2025-05-25 00:59:28.424566 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-25 00:59:28.424570 | orchestrator | Sunday 25 May 2025 00:57:24 +0000 (0:00:00.683) 0:10:44.764 ************ 2025-05-25 00:59:28.424573 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424577 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424593 | orchestrator | skipping: 
[testbed-node-5] 2025-05-25 00:59:28.424597 | orchestrator | 2025-05-25 00:59:28.424601 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-25 00:59:28.424605 | orchestrator | Sunday 25 May 2025 00:57:24 +0000 (0:00:00.547) 0:10:45.311 ************ 2025-05-25 00:59:28.424610 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-25 00:59:28.424614 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424618 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-25 00:59:28.424621 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424625 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-25 00:59:28.424629 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424633 | orchestrator | 2025-05-25 00:59:28.424636 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-25 00:59:28.424640 | orchestrator | Sunday 25 May 2025 00:57:25 +0000 (0:00:00.804) 0:10:46.116 ************ 2025-05-25 00:59:28.424644 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424648 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424651 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424655 | orchestrator | 2025-05-25 00:59:28.424659 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-25 00:59:28.424663 | orchestrator | Sunday 25 May 2025 00:57:26 +0000 (0:00:00.554) 0:10:46.670 ************ 2025-05-25 00:59:28.424667 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424670 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424674 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424678 | orchestrator | 2025-05-25 00:59:28.424681 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-25 00:59:28.424685 | orchestrator | Sunday 25 
May 2025 00:57:26 +0000 (0:00:00.762) 0:10:47.433 ************ 2025-05-25 00:59:28.424689 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424693 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424696 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-25 00:59:28.424700 | orchestrator | 2025-05-25 00:59:28.424704 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-25 00:59:28.424707 | orchestrator | Sunday 25 May 2025 00:57:27 +0000 (0:00:00.427) 0:10:47.861 ************ 2025-05-25 00:59:28.424711 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-25 00:59:28.424715 | orchestrator | 2025-05-25 00:59:28.424718 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-25 00:59:28.424722 | orchestrator | Sunday 25 May 2025 00:57:29 +0000 (0:00:01.716) 0:10:49.578 ************ 2025-05-25 00:59:28.424727 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-25 00:59:28.424732 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.424736 | orchestrator | 2025-05-25 00:59:28.424739 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-25 00:59:28.424743 | orchestrator | Sunday 25 May 2025 00:57:29 +0000 (0:00:00.561) 0:10:50.139 ************ 2025-05-25 00:59:28.424748 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 00:59:28.424757 | orchestrator | changed: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-25 00:59:28.424761 | orchestrator | 2025-05-25 00:59:28.424764 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-25 00:59:28.424772 | orchestrator | Sunday 25 May 2025 00:57:36 +0000 (0:00:06.645) 0:10:56.785 ************ 2025-05-25 00:59:28.424776 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-25 00:59:28.424779 | orchestrator | 2025-05-25 00:59:28.424783 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-25 00:59:28.424787 | orchestrator | Sunday 25 May 2025 00:57:39 +0000 (0:00:02.897) 0:10:59.683 ************ 2025-05-25 00:59:28.424791 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.424795 | orchestrator | 2025-05-25 00:59:28.424798 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-25 00:59:28.424802 | orchestrator | Sunday 25 May 2025 00:57:39 +0000 (0:00:00.558) 0:11:00.242 ************ 2025-05-25 00:59:28.424806 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-25 00:59:28.424810 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-25 00:59:28.424813 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-25 00:59:28.424817 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-25 00:59:28.424823 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-25 00:59:28.424826 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-25 00:59:28.424830 | orchestrator | 2025-05-25 00:59:28.424834 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-25 00:59:28.424838 | orchestrator | Sunday 25 May 2025 00:57:40 +0000 (0:00:00.993) 0:11:01.236 ************ 2025-05-25 00:59:28.424842 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-25 00:59:28.424845 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-25 00:59:28.424849 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-25 00:59:28.424853 | orchestrator | 2025-05-25 00:59:28.424859 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-25 00:59:28.424863 | orchestrator | Sunday 25 May 2025 00:57:42 +0000 (0:00:01.761) 0:11:02.997 ************ 2025-05-25 00:59:28.424866 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-25 00:59:28.424870 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-25 00:59:28.424874 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.424878 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-25 00:59:28.424881 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-25 00:59:28.424897 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.424904 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-25 00:59:28.424910 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-25 00:59:28.424917 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.424923 | orchestrator | 2025-05-25 00:59:28.424929 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-25 00:59:28.424935 | orchestrator | Sunday 25 May 2025 00:57:43 +0000 (0:00:01.149) 0:11:04.146 ************ 2025-05-25 00:59:28.424942 | orchestrator | skipping: 
[testbed-node-3] 2025-05-25 00:59:28.424946 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.424950 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.424954 | orchestrator | 2025-05-25 00:59:28.424957 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-25 00:59:28.424961 | orchestrator | Sunday 25 May 2025 00:57:43 +0000 (0:00:00.316) 0:11:04.463 ************ 2025-05-25 00:59:28.424965 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.424969 | orchestrator | 2025-05-25 00:59:28.424972 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-25 00:59:28.424976 | orchestrator | Sunday 25 May 2025 00:57:44 +0000 (0:00:00.765) 0:11:05.229 ************ 2025-05-25 00:59:28.424984 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.424988 | orchestrator | 2025-05-25 00:59:28.424991 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-25 00:59:28.424995 | orchestrator | Sunday 25 May 2025 00:57:45 +0000 (0:00:00.567) 0:11:05.796 ************ 2025-05-25 00:59:28.424999 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.425002 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.425006 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.425010 | orchestrator | 2025-05-25 00:59:28.425013 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-25 00:59:28.425017 | orchestrator | Sunday 25 May 2025 00:57:46 +0000 (0:00:01.420) 0:11:07.216 ************ 2025-05-25 00:59:28.425021 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.425025 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.425028 | orchestrator | 
changed: [testbed-node-5] 2025-05-25 00:59:28.425032 | orchestrator | 2025-05-25 00:59:28.425036 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-25 00:59:28.425039 | orchestrator | Sunday 25 May 2025 00:57:47 +0000 (0:00:01.170) 0:11:08.387 ************ 2025-05-25 00:59:28.425043 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.425047 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.425050 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.425054 | orchestrator | 2025-05-25 00:59:28.425058 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-25 00:59:28.425061 | orchestrator | Sunday 25 May 2025 00:57:49 +0000 (0:00:01.611) 0:11:09.999 ************ 2025-05-25 00:59:28.425065 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.425069 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.425072 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.425076 | orchestrator | 2025-05-25 00:59:28.425080 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-25 00:59:28.425083 | orchestrator | Sunday 25 May 2025 00:57:51 +0000 (0:00:01.857) 0:11:11.857 ************ 2025-05-25 00:59:28.425087 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-25 00:59:28.425091 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-25 00:59:28.425095 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 
2025-05-25 00:59:28.425098 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.425102 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.425106 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.425109 | orchestrator | 2025-05-25 00:59:28.425113 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-25 00:59:28.425117 | orchestrator | Sunday 25 May 2025 00:58:08 +0000 (0:00:17.000) 0:11:28.857 ************ 2025-05-25 00:59:28.425120 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.425124 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.425128 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.425131 | orchestrator | 2025-05-25 00:59:28.425135 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-25 00:59:28.425139 | orchestrator | Sunday 25 May 2025 00:58:08 +0000 (0:00:00.670) 0:11:29.528 ************ 2025-05-25 00:59:28.425143 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.425146 | orchestrator | 2025-05-25 00:59:28.425152 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-25 00:59:28.425156 | orchestrator | Sunday 25 May 2025 00:58:09 +0000 (0:00:00.737) 0:11:30.265 ************ 2025-05-25 00:59:28.425160 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.425164 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.425167 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.425171 | orchestrator | 2025-05-25 00:59:28.425175 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-25 00:59:28.425181 | orchestrator | Sunday 25 May 2025 00:58:10 +0000 (0:00:00.341) 0:11:30.606 ************ 2025-05-25 00:59:28.425185 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.425189 | 
orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.425195 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.425198 | orchestrator | 2025-05-25 00:59:28.425202 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-25 00:59:28.425206 | orchestrator | Sunday 25 May 2025 00:58:11 +0000 (0:00:01.169) 0:11:31.776 ************ 2025-05-25 00:59:28.425210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.425213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.425217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.425221 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.425224 | orchestrator | 2025-05-25 00:59:28.425228 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-25 00:59:28.425232 | orchestrator | Sunday 25 May 2025 00:58:12 +0000 (0:00:00.868) 0:11:32.645 ************ 2025-05-25 00:59:28.425235 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.425239 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.425243 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.425246 | orchestrator | 2025-05-25 00:59:28.425250 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-25 00:59:28.425254 | orchestrator | Sunday 25 May 2025 00:58:12 +0000 (0:00:00.563) 0:11:33.208 ************ 2025-05-25 00:59:28.425257 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.425261 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.425265 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.425268 | orchestrator | 2025-05-25 00:59:28.425272 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-25 00:59:28.425276 | orchestrator | 2025-05-25 00:59:28.425279 | orchestrator | TASK 
[ceph-handler : include check_running_containers.yml] ********************* 2025-05-25 00:59:28.425283 | orchestrator | Sunday 25 May 2025 00:58:14 +0000 (0:00:01.975) 0:11:35.183 ************ 2025-05-25 00:59:28.425287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.425291 | orchestrator | 2025-05-25 00:59:28.425294 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-25 00:59:28.425298 | orchestrator | Sunday 25 May 2025 00:58:15 +0000 (0:00:00.718) 0:11:35.902 ************ 2025-05-25 00:59:28.425302 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.425305 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.425309 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.425313 | orchestrator | 2025-05-25 00:59:28.425316 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-25 00:59:28.425320 | orchestrator | Sunday 25 May 2025 00:58:15 +0000 (0:00:00.315) 0:11:36.217 ************ 2025-05-25 00:59:28.425324 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.425328 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.425331 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.425335 | orchestrator | 2025-05-25 00:59:28.425339 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-25 00:59:28.425342 | orchestrator | Sunday 25 May 2025 00:58:16 +0000 (0:00:00.686) 0:11:36.904 ************ 2025-05-25 00:59:28.425346 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.425350 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.425354 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.425357 | orchestrator | 2025-05-25 00:59:28.425361 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
2025-05-25 00:59:28.425365 | orchestrator | Sunday 25 May 2025 00:58:17 +0000 (0:00:00.678) 0:11:37.582 ************ 2025-05-25 00:59:28.425368 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.425372 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.425379 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.425383 | orchestrator | 2025-05-25 00:59:28.425386 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-25 00:59:28.425390 | orchestrator | Sunday 25 May 2025 00:58:18 +0000 (0:00:00.997) 0:11:38.580 ************ 2025-05-25 00:59:28.425394 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.425398 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.425401 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.425405 | orchestrator | 2025-05-25 00:59:28.425409 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-25 00:59:28.425413 | orchestrator | Sunday 25 May 2025 00:58:18 +0000 (0:00:00.320) 0:11:38.901 ************ 2025-05-25 00:59:28.425417 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.425420 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.425424 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.425428 | orchestrator | 2025-05-25 00:59:28.425432 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-25 00:59:28.425435 | orchestrator | Sunday 25 May 2025 00:58:18 +0000 (0:00:00.316) 0:11:39.217 ************ 2025-05-25 00:59:28.425439 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.425443 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.425446 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.425450 | orchestrator | 2025-05-25 00:59:28.425454 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-25 
orchestrator | Sunday 25 May 2025 00:58:18 +0000 (0:00:00.318) 0:11:39.536 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
orchestrator | Sunday 25 May 2025 00:58:19 +0000 (0:00:00.562) 0:11:40.098 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
orchestrator | Sunday 25 May 2025 00:58:19 +0000 (0:00:00.309) 0:11:40.407 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
orchestrator | Sunday 25 May 2025 00:58:20 +0000 (0:00:00.317) 0:11:40.725 ************
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
orchestrator | Sunday 25 May 2025 00:58:20 +0000 (0:00:00.696) 0:11:41.422 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
orchestrator | Sunday 25 May 2025 00:58:21 +0000 (0:00:00.605) 0:11:42.027 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
orchestrator | Sunday 25 May 2025 00:58:21 +0000 (0:00:00.326) 0:11:42.354 ************
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
orchestrator | Sunday 25 May 2025 00:58:22 +0000 (0:00:00.329) 0:11:42.684 ************
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
orchestrator | Sunday 25 May 2025 00:58:22 +0000 (0:00:00.314) 0:11:42.999 ************
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
orchestrator | Sunday 25 May 2025 00:58:23 +0000 (0:00:00.620) 0:11:43.619 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
orchestrator | Sunday 25 May 2025 00:58:23 +0000 (0:00:00.309) 0:11:43.929 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
orchestrator | Sunday 25 May 2025 00:58:23 +0000 (0:00:00.310) 0:11:44.240 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
orchestrator | Sunday 25 May 2025 00:58:24 +0000 (0:00:00.335) 0:11:44.575 ************
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
orchestrator | Sunday 25 May 2025 00:58:24 +0000 (0:00:00.635) 0:11:45.211 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
orchestrator | Sunday 25 May 2025 00:58:25 +0000 (0:00:00.339) 0:11:45.551 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : reset num_osds] ********************************************
orchestrator | Sunday 25 May 2025 00:58:25 +0000 (0:00:00.338) 0:11:45.890 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
orchestrator | Sunday 25 May 2025 00:58:25 +0000 (0:00:00.355) 0:11:46.245 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:00:00.637) 0:11:46.882 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:00:00.347) 0:11:47.229 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:00:00.316) 0:11:47.546 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
orchestrator | Sunday 25 May 2025 00:58:27 +0000 (0:00:00.398) 0:11:47.944 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
orchestrator | Sunday 25 May 2025 00:58:28 +0000 (0:00:00.613) 0:11:48.558 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
orchestrator | Sunday 25 May 2025 00:58:28 +0000 (0:00:00.338) 0:11:48.896 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
orchestrator | Sunday 25 May 2025 00:58:28 +0000 (0:00:00.360) 0:11:49.257 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
orchestrator | Sunday 25 May 2025 00:58:29 +0000 (0:00:00.326) 0:11:49.584 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
orchestrator | Sunday 25 May 2025 00:58:29 +0000 (0:00:00.593) 0:11:50.177 ************
orchestrator | skipping: [testbed-node-3] => (item=)
orchestrator | skipping: [testbed-node-3] => (item=)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=)
orchestrator | skipping: [testbed-node-4] => (item=)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=)
orchestrator | skipping: [testbed-node-5] => (item=)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
orchestrator | Sunday 25 May 2025 00:58:30 +0000 (0:00:00.373) 0:11:50.551 ************
orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
orchestrator | Sunday 25 May 2025 00:58:30 +0000 (0:00:00.380) 0:11:50.932 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
orchestrator | Sunday 25 May 2025 00:58:30 +0000 (0:00:00.358) 0:11:51.291 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
orchestrator | Sunday 25 May 2025 00:58:31 +0000 (0:00:00.627) 0:11:51.918 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
orchestrator | Sunday 25 May 2025 00:58:31 +0000 (0:00:00.341) 0:11:52.259 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
orchestrator | Sunday 25 May 2025 00:58:32 +0000 (0:00:00.326) 0:11:52.586 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
orchestrator | Sunday 25 May 2025 00:58:32 +0000 (0:00:00.341) 0:11:52.927 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
orchestrator | Sunday 25 May 2025 00:58:32 +0000 (0:00:00.592) 0:11:53.519 ************
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
orchestrator | Sunday 25 May 2025 00:58:33 +0000 (0:00:00.426) 0:11:53.946 ************
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
orchestrator | Sunday 25 May 2025 00:58:33 +0000 (0:00:00.426) 0:11:54.372 ************
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
orchestrator | Sunday 25 May 2025 00:58:34 +0000 (0:00:00.462) 0:11:54.834 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
orchestrator | Sunday 25 May 2025 00:58:34 +0000 (0:00:00.334) 0:11:55.169 ************
orchestrator | skipping: [testbed-node-3] => (item=0)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=0)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=0)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
orchestrator | Sunday 25 May 2025 00:58:35 +0000 (0:00:00.456) 0:11:55.625 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
orchestrator | Sunday 25 May 2025 00:58:35 +0000 (0:00:00.588) 0:11:56.214 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
orchestrator | Sunday 25 May 2025 00:58:35 +0000 (0:00:00.320) 0:11:56.534 ************
orchestrator | skipping: [testbed-node-3] => (item=0)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=0)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=0)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
orchestrator | Sunday 25 May 2025 00:58:36 +0000 (0:00:00.493) 0:11:57.028 ************
orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
orchestrator | Sunday 25 May 2025 00:58:36 +0000 (0:00:00.340) 0:11:57.368 ************
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
orchestrator | Sunday 25 May 2025 00:58:37 +0000 (0:00:00.888) 0:11:58.256 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
orchestrator | Sunday 25 May 2025 00:58:38 +0000 (0:00:00.545) 0:11:58.802 ************
orchestrator | skipping: [testbed-node-3] => (item=None)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=None)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=None)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
orchestrator | Sunday 25 May 2025 00:58:39 +0000 (0:00:00.797) 0:11:59.599 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
orchestrator | Sunday 25 May 2025 00:58:39 +0000 (0:00:00.794) 0:12:00.147 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : include common.yml] *******************************************
orchestrator | Sunday 25 May 2025 00:58:40 +0000 (0:00:00.794) 0:12:00.941 ************
orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-rgw : create rados gateway directories] *****************************
orchestrator | Sunday 25 May 2025 00:58:40 +0000 (0:00:00.534) 0:12:01.476 ************
orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
orchestrator |
orchestrator | TASK [ceph-rgw : get keys from monitors] ***************************************
orchestrator | Sunday 25 May 2025 00:58:41 +0000 (0:00:00.956) 0:12:02.432 ************
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
orchestrator | skipping: [testbed-node-3] => (item=None)
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
orchestrator |
orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] ***********************************
orchestrator | Sunday 25 May 2025 00:58:43 +0000 (0:00:01.784) 0:12:04.217 ************
orchestrator | changed: [testbed-node-3] => (item=None)
orchestrator | skipping: [testbed-node-3] => (item=None)
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4] => (item=None)
orchestrator | skipping: [testbed-node-4] => (item=None)
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5] => (item=None)
orchestrator | skipping: [testbed-node-5] => (item=None)
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] **********
orchestrator | Sunday 25 May 2025 00:58:44 +0000 (0:00:01.143) 0:12:05.360 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ******************************
orchestrator | Sunday 25 May 2025 00:58:45 +0000 (0:00:00.314) 0:12:05.674 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : rgw pool creation tasks] **************************************
orchestrator | Sunday 25 May 2025 00:58:45 +0000 (0:00:00.569) 0:12:06.244 ************
orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-rgw : create ec profile] ********************************************
orchestrator | Sunday 25 May 2025 00:58:45 +0000 (0:00:00.227) 0:12:06.472 ************
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-rgw : set crush rule] ***********************************************
orchestrator | Sunday 25 May 2025 00:58:46 +0000 (0:00:00.635) 0:12:07.108 ************
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-rgw : create ec pools for rgw] **************************************
orchestrator | Sunday 25 May 2025 00:58:47 +0000 (0:00:00.890) 0:12:07.998 ************
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ******************************
orchestrator | Sunday 25 May 2025 00:58:48 +0000 (0:00:00.879) 0:12:08.878 ************
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
orchestrator |
orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] *************************
orchestrator | Sunday 25 May 2025 00:59:10 +0000 (0:00:22.610) 0:12:31.489 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ******************************
orchestrator | Sunday 25 May 2025 00:59:11 +0000 (0:00:00.470) 0:12:31.959 ************
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] *********************************
orchestrator | Sunday 25 May 2025 00:59:11 +0000 (0:00:00.345) 0:12:32.305 ************
orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-rgw : include_task systemd.yml] *************************************
orchestrator | Sunday 25 May 2025 00:59:12 +0000 (0:00:00.588) 0:12:32.894 ************
orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-rgw : generate systemd unit file] ***********************************
orchestrator | Sunday 25 May 2025 00:59:13 +0000 (0:00:00.781) 0:12:33.675 ************
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ********************
orchestrator | Sunday 25 May 2025 00:59:14 +0000 (0:00:01.153) 0:12:34.829 ************
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] ***********************************
orchestrator | Sunday 25 May 2025 00:59:15 +0000 (0:00:01.107) 0:12:35.937 ************
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-rgw : systemd start rgw container] **********************************
orchestrator | Sunday 25 May 2025 00:59:17 +0000 (0:00:01.967) 0:12:37.904 ************
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator | changed: [testbed-node-4] => (item={'instance_name':
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.427687 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-25 00:59:28.427691 | orchestrator | 2025-05-25 00:59:28.427695 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-05-25 00:59:28.427698 | orchestrator | Sunday 25 May 2025 00:59:19 +0000 (0:00:01.836) 0:12:39.741 ************ 2025-05-25 00:59:28.427702 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.427706 | orchestrator | skipping: [testbed-node-4] 2025-05-25 00:59:28.427710 | orchestrator | skipping: [testbed-node-5] 2025-05-25 00:59:28.427713 | orchestrator | 2025-05-25 00:59:28.427717 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-25 00:59:28.427721 | orchestrator | Sunday 25 May 2025 00:59:20 +0000 (0:00:01.162) 0:12:40.904 ************ 2025-05-25 00:59:28.427725 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.427728 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.427732 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.427736 | orchestrator | 2025-05-25 00:59:28.427740 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-25 00:59:28.427743 | orchestrator | Sunday 25 May 2025 00:59:21 +0000 (0:00:00.653) 0:12:41.558 ************ 2025-05-25 00:59:28.427750 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 00:59:28.427754 | orchestrator | 2025-05-25 00:59:28.427757 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-25 00:59:28.427761 | orchestrator | Sunday 25 May 2025 00:59:21 +0000 (0:00:00.750) 0:12:42.308 ************ 2025-05-25 00:59:28.427765 
| orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.427768 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.427772 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.427776 | orchestrator | 2025-05-25 00:59:28.427780 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-25 00:59:28.427783 | orchestrator | Sunday 25 May 2025 00:59:22 +0000 (0:00:00.348) 0:12:42.656 ************ 2025-05-25 00:59:28.427789 | orchestrator | changed: [testbed-node-3] 2025-05-25 00:59:28.427793 | orchestrator | changed: [testbed-node-4] 2025-05-25 00:59:28.427797 | orchestrator | changed: [testbed-node-5] 2025-05-25 00:59:28.427801 | orchestrator | 2025-05-25 00:59:28.427804 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-25 00:59:28.427808 | orchestrator | Sunday 25 May 2025 00:59:23 +0000 (0:00:01.200) 0:12:43.857 ************ 2025-05-25 00:59:28.427812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-25 00:59:28.427815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-25 00:59:28.427822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-25 00:59:28.427826 | orchestrator | skipping: [testbed-node-3] 2025-05-25 00:59:28.427829 | orchestrator | 2025-05-25 00:59:28.427833 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-25 00:59:28.427837 | orchestrator | Sunday 25 May 2025 00:59:24 +0000 (0:00:01.149) 0:12:45.006 ************ 2025-05-25 00:59:28.427841 | orchestrator | ok: [testbed-node-3] 2025-05-25 00:59:28.427844 | orchestrator | ok: [testbed-node-4] 2025-05-25 00:59:28.427849 | orchestrator | ok: [testbed-node-5] 2025-05-25 00:59:28.427855 | orchestrator | 2025-05-25 00:59:28.427860 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-25 00:59:28.427871 | 
orchestrator | Sunday 25 May 2025 00:59:24 +0000 (0:00:00.343) 0:12:45.349 ************
2025-05-25 00:59:28.427877 | orchestrator | changed: [testbed-node-3]
2025-05-25 00:59:28.427882 | orchestrator | changed: [testbed-node-4]
2025-05-25 00:59:28.427903 | orchestrator | changed: [testbed-node-5]
2025-05-25 00:59:28.427909 | orchestrator |
2025-05-25 00:59:28.427915 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:59:28.427920 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-05-25 00:59:28.427926 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-05-25 00:59:28.427932 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-05-25 00:59:28.427938 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-05-25 00:59:28.427944 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-05-25 00:59:28.427950 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-05-25 00:59:28.427957 | orchestrator |
2025-05-25 00:59:28.427962 | orchestrator |
2025-05-25 00:59:28.427968 | orchestrator |
2025-05-25 00:59:28.427974 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:59:28.427980 | orchestrator | Sunday 25 May 2025 00:59:26 +0000 (0:00:01.258) 0:12:46.608 ************
2025-05-25 00:59:28.427986 | orchestrator | ===============================================================================
2025-05-25 00:59:28.427992 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 52.26s
2025-05-25 00:59:28.427998 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 39.72s
2025-05-25 00:59:28.428003 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 22.61s
2025-05-25 00:59:28.428009 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.46s
2025-05-25 00:59:28.428016 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.00s
2025-05-25 00:59:28.428022 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.26s
2025-05-25 00:59:28.428027 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.62s
2025-05-25 00:59:28.428033 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.27s
2025-05-25 00:59:28.428040 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.04s
2025-05-25 00:59:28.428045 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.65s
2025-05-25 00:59:28.428052 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.52s
2025-05-25 00:59:28.428058 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.83s
2025-05-25 00:59:28.428069 | orchestrator | ceph-config : create ceph initial directories --------------------------- 5.71s
2025-05-25 00:59:28.428076 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.53s
2025-05-25 00:59:28.428083 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.22s
2025-05-25 00:59:28.428090 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.13s
2025-05-25 00:59:28.428094 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.90s
2025-05-25 00:59:28.428098 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.39s
2025-05-25 00:59:28.428102 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.26s
2025-05-25 00:59:28.428105 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 2.95s
2025-05-25 00:59:28.428109 | orchestrator | 2025-05-25 00:59:28 | INFO  | Task ea65ca1d-644c-42a9-bcc0-cfb062558f9b is in state SUCCESS
2025-05-25 00:59:28.428116 | orchestrator | 2025-05-25 00:59:28 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:59:28.428120 | orchestrator | 2025-05-25 00:59:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:59:28.428124 | orchestrator | 2025-05-25 00:59:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:59:28.428128 | orchestrator | 2025-05-25 00:59:28 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED
2025-05-25 00:59:28.428131 | orchestrator | 2025-05-25 00:59:28 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:59:28.428135 | orchestrator | 2025-05-25 00:59:28 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:59:31.448149 | orchestrator | 2025-05-25 00:59:31 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state STARTED
2025-05-25 00:59:31.450361 | orchestrator | 2025-05-25 00:59:31 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state STARTED
2025-05-25 00:59:31.451923 | orchestrator | 2025-05-25 00:59:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:59:31.455819 | orchestrator | 2025-05-25 00:59:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 00:59:31.455856 | orchestrator | 2025-05-25 00:59:31 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED
2025-05-25 00:59:31.457435 | orchestrator | 2025-05-25 00:59:31 | INFO  | Task
404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 00:59:31.457468 | orchestrator | 2025-05-25 00:59:31 | INFO  | Wait 1 second(s) until the next check
2025-05-25 00:59:34.518834 | orchestrator |
2025-05-25 00:59:34.519098 | orchestrator |
2025-05-25 00:59:34.519121 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:59:34.519134 | orchestrator |
2025-05-25 00:59:34.519145 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:59:34.519157 | orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:00:00.307) 0:00:00.307 ************
2025-05-25 00:59:34.519169 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:34.519182 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:34.519308 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:34.519322 | orchestrator |
2025-05-25 00:59:34.519333 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:59:34.519345 | orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:00:00.374) 0:00:00.681 ************
2025-05-25 00:59:34.519355 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-25 00:59:34.519367 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-25 00:59:34.519378 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-25 00:59:34.519388 | orchestrator |
2025-05-25 00:59:34.519576 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-25 00:59:34.519594 | orchestrator |
2025-05-25 00:59:34.519606 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-25 00:59:34.519616 | orchestrator | Sunday 25 May 2025 00:58:27 +0000 (0:00:00.401) 0:00:01.082 ************
2025-05-25 00:59:34.519627 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:34.519639 | orchestrator |
2025-05-25 00:59:34.519650 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-25 00:59:34.519660 | orchestrator | Sunday 25 May 2025 00:58:27 +0000 (0:00:00.754) 0:00:01.837 ************
2025-05-25 00:59:34.519671 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (5 retries left).
2025-05-25 00:59:34.519683 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (4 retries left).
2025-05-25 00:59:34.519693 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (3 retries left).
2025-05-25 00:59:34.519704 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (2 retries left).
2025-05-25 00:59:34.519714 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (1 retries left).
2025-05-25 00:59:34.519776 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134770.3733413-3149-119064142789359/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134770.3733413-3149-119064142789359/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134770.3733413-3149-119064142789359/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', 
init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_7xzgtodg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_7xzgtodg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_7xzgtodg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_7xzgtodg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_7xzgtodg/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 
1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-05-25 00:59:34.519802 | orchestrator |
2025-05-25 00:59:34.519814 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:59:34.519825 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-25 00:59:34.519837 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:59:34.519917 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:59:34.519929 | orchestrator |
2025-05-25 00:59:34.519940 | orchestrator |
2025-05-25 00:59:34.519951 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:59:34.519962 | orchestrator | Sunday 25 May 2025 00:59:31 +0000 (0:01:03.693) 0:01:05.531 ************
2025-05-25 00:59:34.520107 | orchestrator | ===============================================================================
2025-05-25 00:59:34.520137 | orchestrator | service-ks-register : magnum | Creating services ----------------------- 63.69s
2025-05-25 00:59:34.520148 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.75s
2025-05-25 00:59:34.520159 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2025-05-25 00:59:34.520170 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2025-05-25 00:59:34.520181 | orchestrator | 2025-05-25 00:59:34 | INFO  | Task f688edd1-6169-42e4-9982-78a3c0f064d7 is in state SUCCESS
2025-05-25 00:59:34.520192 | orchestrator | 2025-05-25 00:59:34 | INFO  | Task c87a997c-f55b-4564-b8f3-7fa89efbff9a is in state SUCCESS
2025-05-25 00:59:34.520202 | orchestrator |
2025-05-25 00:59:34.520213 | orchestrator |
2025-05-25 00:59:34.520223 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 00:59:34.520235 | orchestrator |
2025-05-25 00:59:34.520245 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 00:59:34.520256 | orchestrator | Sunday 25 May 2025 00:58:25 +0000 (0:00:00.326) 0:00:00.326 ************
2025-05-25 00:59:34.520266 | orchestrator | ok: [testbed-node-0]
2025-05-25 00:59:34.520277 | orchestrator | ok: [testbed-node-1]
2025-05-25 00:59:34.520329 | orchestrator | ok: [testbed-node-2]
2025-05-25 00:59:34.520340 | orchestrator |
2025-05-25 00:59:34.520351 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 00:59:34.520362 | orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:00:00.435) 0:00:00.762 ************
2025-05-25 00:59:34.520373 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-05-25 00:59:34.520383 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-05-25 00:59:34.520394 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-05-25 00:59:34.520405 | orchestrator |
2025-05-25 00:59:34.520415 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-05-25 00:59:34.520426 | orchestrator |
2025-05-25 00:59:34.520436 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-25 00:59:34.520447 | orchestrator | Sunday 25 May 2025 00:58:26 +0000 (0:00:00.295) 0:00:01.057 ************
2025-05-25 00:59:34.520458 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 00:59:34.520469 | orchestrator |
2025-05-25 00:59:34.520479 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-05-25 00:59:34.520490 | orchestrator | Sunday 25 May 2025 00:58:27 +0000 (0:00:00.828) 0:00:01.886 ************
2025-05-25 00:59:34.520501 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (5 retries left).
2025-05-25 00:59:34.520511 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (4 retries left).
2025-05-25 00:59:34.520522 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (3 retries left).
2025-05-25 00:59:34.520533 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (2 retries left).
2025-05-25 00:59:34.520543 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (1 retries left).
2025-05-25 00:59:34.520582 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134770.2967682-3138-39459745966915/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134770.2967682-3138-39459745966915/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134770.2967682-3138-39459745966915/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', 
init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_trla6n0w/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_trla6n0w/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_trla6n0w/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_trla6n0w/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_trla6n0w/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 
1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-05-25 00:59:34.520609 | orchestrator |
2025-05-25 00:59:34.520627 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 00:59:34.520639 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-25 00:59:34.520650 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:59:34.520661 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 00:59:34.520672 | orchestrator |
2025-05-25 00:59:34.520682 | orchestrator |
2025-05-25 00:59:34.520693 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 00:59:34.520704 | orchestrator | Sunday 25 May 2025 00:59:31 +0000 (0:01:03.832) 0:01:05.718 ************
2025-05-25 00:59:34.520714 | orchestrator | ===============================================================================
2025-05-25 00:59:34.520725 | orchestrator | service-ks-register : placement | Creating services -------------------- 63.83s
2025-05-25 00:59:34.520738 | orchestrator | placement : include_tasks ----------------------------------------------- 0.83s
2025-05-25 00:59:34.520750 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2025-05-25 00:59:34.520762 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s
2025-05-25 00:59:34.520774 | orchestrator | 2025-05-25 00:59:34 | INFO  | Task b5602ad5-7e7c-4e5b-b658-52234c17f740 is in state STARTED
2025-05-25 00:59:34.521258 | orchestrator | 2025-05-25 00:59:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 00:59:34.522683 | orchestrator | 2025-05-25 00:59:34 | INFO  | Task
87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:34.523797 | orchestrator | 2025-05-25 00:59:34 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 00:59:34.524762 | orchestrator | 2025-05-25 00:59:34 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:34.524784 | orchestrator | 2025-05-25 00:59:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:37.572735 | orchestrator | 2025-05-25 00:59:37 | INFO  | Task b5602ad5-7e7c-4e5b-b658-52234c17f740 is in state STARTED 2025-05-25 00:59:37.574357 | orchestrator | 2025-05-25 00:59:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:37.576779 | orchestrator | 2025-05-25 00:59:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:37.579476 | orchestrator | 2025-05-25 00:59:37 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 00:59:37.581681 | orchestrator | 2025-05-25 00:59:37 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:37.582148 | orchestrator | 2025-05-25 00:59:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 00:59:40.637638 | orchestrator | 2025-05-25 00:59:40 | INFO  | Task b5602ad5-7e7c-4e5b-b658-52234c17f740 is in state STARTED 2025-05-25 00:59:40.639755 | orchestrator | 2025-05-25 00:59:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 00:59:40.641310 | orchestrator | 2025-05-25 00:59:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 00:59:40.644320 | orchestrator | 2025-05-25 00:59:40 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 00:59:40.645751 | orchestrator | 2025-05-25 00:59:40 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 00:59:40.645777 | orchestrator | 2025-05-25 00:59:40 | INFO  | Wait 1 
second(s) until the next check 2025-05-25 01:00:35.661637 | orchestrator | 2025-05-25 01:00:35 | INFO  | Task b5602ad5-7e7c-4e5b-b658-52234c17f740 is in state STARTED 2025-05-25 01:00:35.663241 | orchestrator | 2025-05-25 01:00:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:00:35.664845 | orchestrator | 2025-05-25 01:00:35 | INFO  | Task 
87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:00:35.666470 | orchestrator | 2025-05-25 01:00:35 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:00:35.668162 | orchestrator | 2025-05-25 01:00:35 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:00:35.668286 | orchestrator | 2025-05-25 01:00:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:00:38.718750 | orchestrator | 2025-05-25 01:00:38 | INFO  | Task b5602ad5-7e7c-4e5b-b658-52234c17f740 is in state STARTED 2025-05-25 01:00:38.720174 | orchestrator | 2025-05-25 01:00:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:00:38.721931 | orchestrator | 2025-05-25 01:00:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:00:38.722773 | orchestrator | 2025-05-25 01:00:38 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:00:38.724319 | orchestrator | 2025-05-25 01:00:38 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:00:38.724344 | orchestrator | 2025-05-25 01:00:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:00:41.775570 | orchestrator | 2025-05-25 01:00:41.775679 | orchestrator | 2025-05-25 01:00:41.775695 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-25 01:00:41.775707 | orchestrator | 2025-05-25 01:00:41.775718 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-25 01:00:41.775730 | orchestrator | Sunday 25 May 2025 00:59:34 +0000 (0:00:00.317) 0:00:00.317 ************ 2025-05-25 01:00:41.775741 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:00:41.775754 | orchestrator | ok: [testbed-node-1] 2025-05-25 01:00:41.775764 | orchestrator | ok: [testbed-node-2] 2025-05-25 01:00:41.775775 | orchestrator | 2025-05-25 01:00:41.775786 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-25 01:00:41.775797 | orchestrator | Sunday 25 May 2025 00:59:35 +0000 (0:00:00.373) 0:00:00.691 ************ 2025-05-25 01:00:41.775808 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-25 01:00:41.775819 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-25 01:00:41.775830 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-25 01:00:41.775841 | orchestrator | 2025-05-25 01:00:41.775851 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-25 01:00:41.775862 | orchestrator | 2025-05-25 01:00:41.775873 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-25 01:00:41.775884 | orchestrator | Sunday 25 May 2025 00:59:35 +0000 (0:00:00.340) 0:00:01.032 ************ 2025-05-25 01:00:41.775896 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-25 01:00:41.775907 | orchestrator | 2025-05-25 01:00:41.775918 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-25 01:00:41.775929 | orchestrator | Sunday 25 May 2025 00:59:36 +0000 (0:00:00.721) 0:00:01.753 ************ 2025-05-25 01:00:41.775940 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (5 retries left). 2025-05-25 01:00:41.775950 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (4 retries left). 2025-05-25 01:00:41.775961 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (3 retries left). 2025-05-25 01:00:41.775972 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (2 retries left). 
2025-05-25 01:00:41.776037 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (1 retries left). 2025-05-25 01:00:41.776105 | orchestrator | failed: [testbed-node-0] (item=octavia (load-balancer)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Octavia Load Balancing Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9876"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9876"}], "name": "octavia", "type": "load-balancer"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring 
handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1748134838.891023-3402-274329899579079/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748134838.891023-3402-274329899579079/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748134838.891023-3402-274329899579079/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_ielh2hoy/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_ielh2hoy/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_ielh2hoy/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 415, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_ielh2hoy/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File 
\"/tmp/ansible_os_keystone_service_payload_ielh2hoy/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 289, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-05-25 01:00:41.776156 | orchestrator | 2025-05-25 01:00:41.776171 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-25 01:00:41.776184 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-25 01:00:41.776199 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 01:00:41.776214 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-25 01:00:41.776226 | orchestrator | 2025-05-25 01:00:41.776239 | orchestrator | 2025-05-25 01:00:41.776250 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 01:00:41.776261 | orchestrator | Sunday 25 May 2025 01:00:40 +0000 (0:01:03.916) 0:01:05.670 ************ 2025-05-25 01:00:41.776278 | orchestrator | =============================================================================== 2025-05-25 01:00:41.776289 | orchestrator | service-ks-register : octavia | Creating services ---------------------- 63.92s 2025-05-25 01:00:41.776300 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.72s 2025-05-25 01:00:41.776311 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-05-25 01:00:41.776437 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-05-25 01:00:41.776455 | orchestrator | 2025-05-25 01:00:41 | INFO  | Task b5602ad5-7e7c-4e5b-b658-52234c17f740 is in state SUCCESS 2025-05-25 01:00:41.776472 | orchestrator | 2025-05-25 01:00:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:00:41.777880 | orchestrator | 2025-05-25 01:00:41 | INFO  | Task 
87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:00:41.778968 | orchestrator | 2025-05-25 01:00:41 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:00:41.779945 | orchestrator | 2025-05-25 01:00:41 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:00:41.780011 | orchestrator | 2025-05-25 01:00:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:00:44.824695 | orchestrator | 2025-05-25 01:00:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:00:44.826413 | orchestrator | 2025-05-25 01:00:44 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:00:44.827945 | orchestrator | 2025-05-25 01:00:44 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:00:44.829404 | orchestrator | 2025-05-25 01:00:44 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:00:44.829423 | orchestrator | 2025-05-25 01:00:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:00:47.876499 | orchestrator | 2025-05-25 01:00:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:00:47.878964 | orchestrator | 2025-05-25 01:00:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:00:47.881617 | orchestrator | 2025-05-25 01:00:47 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:00:47.883816 | orchestrator | 2025-05-25 01:00:47 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:00:47.883861 | orchestrator | 2025-05-25 01:00:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:00:50.933643 | orchestrator | 2025-05-25 01:00:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:00:50.935266 | orchestrator | 2025-05-25 01:00:50 | INFO  | Task 
87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:00:50.937479 | orchestrator | 2025-05-25 01:00:50 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:00:50.939426 | orchestrator | 2025-05-25 01:00:50 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:00:50.939481 | orchestrator | 2025-05-25 01:00:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:01:18.416836 | orchestrator | 2025-05-25 01:01:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:01:18.418272 | orchestrator | 2025-05-25 01:01:18 | INFO  | Task 
87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:01:18.419700 | orchestrator | 2025-05-25 01:01:18 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:01:18.420766 | orchestrator | 2025-05-25 01:01:18 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:01:18.420814 | orchestrator | 2025-05-25 01:01:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:01:21.469424 | orchestrator | 2025-05-25 01:01:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:01:21.471141 | orchestrator | 2025-05-25 01:01:21 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:01:21.472293 | orchestrator | 2025-05-25 01:01:21 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:01:21.473562 | orchestrator | 2025-05-25 01:01:21 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:01:21.473581 | orchestrator | 2025-05-25 01:01:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:01:24.520960 | orchestrator | 2025-05-25 01:01:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:01:24.522362 | orchestrator | 2025-05-25 01:01:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:01:24.524339 | orchestrator | 2025-05-25 01:01:24 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:01:24.526177 | orchestrator | 2025-05-25 01:01:24 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:01:24.526213 | orchestrator | 2025-05-25 01:01:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:01:27.566576 | orchestrator | 2025-05-25 01:01:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:01:27.567917 | orchestrator | 2025-05-25 01:01:27 | INFO  | Task 
87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:01:27.569344 | orchestrator | 2025-05-25 01:01:27 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:01:27.570412 | orchestrator | 2025-05-25 01:01:27 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:01:27.570500 | orchestrator | 2025-05-25 01:01:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:01:30.623530 | orchestrator | 2025-05-25 01:01:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:01:30.625080 | orchestrator | 2025-05-25 01:01:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:01:30.628138 | orchestrator | 2025-05-25 01:01:30 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state STARTED 2025-05-25 01:01:30.629602 | orchestrator | 2025-05-25 01:01:30 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:01:30.629915 | orchestrator | 2025-05-25 01:01:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:01:33.680848 | orchestrator | 2025-05-25 01:01:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:01:33.681280 | orchestrator | 2025-05-25 01:01:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:01:33.683668 | orchestrator | 2025-05-25 01:01:33 | INFO  | Task 8739cd69-e0d1-425b-84af-1e9098dd7ae2 is in state SUCCESS 2025-05-25 01:01:33.685272 | orchestrator | 2025-05-25 01:01:33.685315 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-25 01:01:33.685328 | orchestrator | 2025-05-25 01:01:33.685340 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-25 01:01:33.685351 | orchestrator | 2025-05-25 01:01:33.685454 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] 
******** 2025-05-25 01:01:33.685470 | orchestrator | Sunday 25 May 2025 00:59:30 +0000 (0:00:01.131) 0:00:01.131 ************ 2025-05-25 01:01:33.685482 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 01:01:33.685495 | orchestrator | 2025-05-25 01:01:33.685506 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-25 01:01:33.685539 | orchestrator | Sunday 25 May 2025 00:59:31 +0000 (0:00:00.518) 0:00:01.650 ************ 2025-05-25 01:01:33.685552 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-25 01:01:33.685564 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-25 01:01:33.685604 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-25 01:01:33.685616 | orchestrator | 2025-05-25 01:01:33.685627 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-25 01:01:33.685638 | orchestrator | Sunday 25 May 2025 00:59:32 +0000 (0:00:00.851) 0:00:02.502 ************ 2025-05-25 01:01:33.685648 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 01:01:33.685660 | orchestrator | 2025-05-25 01:01:33.685670 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-25 01:01:33.685681 | orchestrator | Sunday 25 May 2025 00:59:32 +0000 (0:00:00.682) 0:00:03.184 ************ 2025-05-25 01:01:33.685692 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.685703 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.685714 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.685724 | orchestrator | 2025-05-25 01:01:33.685735 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-25 01:01:33.685746 | orchestrator | 
Sunday 25 May 2025 00:59:33 +0000 (0:00:00.636) 0:00:03.821 ************ 2025-05-25 01:01:33.685757 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.685768 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.685779 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.685790 | orchestrator | 2025-05-25 01:01:33.685801 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-25 01:01:33.685812 | orchestrator | Sunday 25 May 2025 00:59:33 +0000 (0:00:00.299) 0:00:04.121 ************ 2025-05-25 01:01:33.685823 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.685833 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.685847 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.685860 | orchestrator | 2025-05-25 01:01:33.685872 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-25 01:01:33.685885 | orchestrator | Sunday 25 May 2025 00:59:34 +0000 (0:00:00.813) 0:00:04.934 ************ 2025-05-25 01:01:33.685897 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.685910 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.685923 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.685936 | orchestrator | 2025-05-25 01:01:33.685949 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-25 01:01:33.685962 | orchestrator | Sunday 25 May 2025 00:59:34 +0000 (0:00:00.306) 0:00:05.241 ************ 2025-05-25 01:01:33.685975 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.685988 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.686000 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.686011 | orchestrator | 2025-05-25 01:01:33.686087 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-25 01:01:33.686099 | orchestrator | Sunday 25 May 2025 00:59:35 +0000 (0:00:00.338) 0:00:05.580 ************ 
2025-05-25 01:01:33.686111 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.686122 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.686132 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.686143 | orchestrator | 2025-05-25 01:01:33.686154 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-25 01:01:33.686165 | orchestrator | Sunday 25 May 2025 00:59:35 +0000 (0:00:00.320) 0:00:05.900 ************ 2025-05-25 01:01:33.686176 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.686189 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.686199 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.686210 | orchestrator | 2025-05-25 01:01:33.686221 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-25 01:01:33.686232 | orchestrator | Sunday 25 May 2025 00:59:36 +0000 (0:00:00.480) 0:00:06.381 ************ 2025-05-25 01:01:33.686255 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.686266 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.686277 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.686287 | orchestrator | 2025-05-25 01:01:33.686298 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-25 01:01:33.686309 | orchestrator | Sunday 25 May 2025 00:59:36 +0000 (0:00:00.277) 0:00:06.658 ************ 2025-05-25 01:01:33.686320 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-25 01:01:33.686331 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 01:01:33.686342 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 01:01:33.686352 | orchestrator | 2025-05-25 01:01:33.686363 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] 
******************************** 2025-05-25 01:01:33.686374 | orchestrator | Sunday 25 May 2025 00:59:36 +0000 (0:00:00.638) 0:00:07.296 ************ 2025-05-25 01:01:33.686385 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.686395 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.686406 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.686417 | orchestrator | 2025-05-25 01:01:33.686427 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-25 01:01:33.686438 | orchestrator | Sunday 25 May 2025 00:59:37 +0000 (0:00:00.428) 0:00:07.725 ************ 2025-05-25 01:01:33.686464 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-25 01:01:33.686475 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 01:01:33.686493 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 01:01:33.686504 | orchestrator | 2025-05-25 01:01:33.686515 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-25 01:01:33.686526 | orchestrator | Sunday 25 May 2025 00:59:39 +0000 (0:00:02.248) 0:00:09.973 ************ 2025-05-25 01:01:33.686537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-25 01:01:33.686548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-25 01:01:33.686559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-25 01:01:33.686570 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.686580 | orchestrator | 2025-05-25 01:01:33.686591 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-25 01:01:33.686602 | orchestrator | Sunday 25 May 2025 00:59:40 +0000 (0:00:00.431) 0:00:10.405 ************ 2025-05-25 01:01:33.686615 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-25 01:01:33.686630 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-25 01:01:33.686641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-25 01:01:33.686653 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.686663 | orchestrator | 2025-05-25 01:01:33.686674 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-25 01:01:33.686685 | orchestrator | Sunday 25 May 2025 00:59:40 +0000 (0:00:00.660) 0:00:11.065 ************ 2025-05-25 01:01:33.686699 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 01:01:33.686723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 01:01:33.686734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 01:01:33.686746 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.686757 | orchestrator | 2025-05-25 01:01:33.686767 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-25 01:01:33.686778 | orchestrator | Sunday 25 May 2025 00:59:40 +0000 (0:00:00.159) 0:00:11.224 ************ 2025-05-25 01:01:33.686792 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '93d61f55e986', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-25 00:59:38.238621', 'end': '2025-05-25 00:59:38.279850', 'delta': '0:00:00.041229', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['93d61f55e986'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-25 01:01:33.686849 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '567d1c362456', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-25 00:59:38.820739', 'end': '2025-05-25 00:59:38.867025', 'delta': '0:00:00.046286', 'msg': '', 'invocation': {'module_args': 
{'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['567d1c362456'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-25 01:01:33.686864 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a7349bafd4ea', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-25 00:59:39.350936', 'end': '2025-05-25 00:59:39.392087', 'delta': '0:00:00.041151', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a7349bafd4ea'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-25 01:01:33.686876 | orchestrator | 2025-05-25 01:01:33.686887 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-25 01:01:33.686898 | orchestrator | Sunday 25 May 2025 00:59:41 +0000 (0:00:00.218) 0:00:11.443 ************ 2025-05-25 01:01:33.686909 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.686920 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.686938 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.686949 | orchestrator | 2025-05-25 01:01:33.686960 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-25 01:01:33.686971 | orchestrator | Sunday 25 May 2025 00:59:41 +0000 (0:00:00.438) 0:00:11.881 ************ 2025-05-25 01:01:33.686981 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2025-05-25 01:01:33.686992 | orchestrator | 2025-05-25 01:01:33.687003 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-25 01:01:33.687013 | orchestrator | Sunday 25 May 2025 00:59:42 +0000 (0:00:01.287) 0:00:13.169 ************ 2025-05-25 01:01:33.687024 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687083 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687096 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687106 | orchestrator | 2025-05-25 01:01:33.687117 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-25 01:01:33.687128 | orchestrator | Sunday 25 May 2025 00:59:43 +0000 (0:00:00.441) 0:00:13.611 ************ 2025-05-25 01:01:33.687139 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687150 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687161 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687172 | orchestrator | 2025-05-25 01:01:33.687183 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-25 01:01:33.687194 | orchestrator | Sunday 25 May 2025 00:59:43 +0000 (0:00:00.400) 0:00:14.011 ************ 2025-05-25 01:01:33.687205 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687215 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687226 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687237 | orchestrator | 2025-05-25 01:01:33.687248 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-25 01:01:33.687259 | orchestrator | Sunday 25 May 2025 00:59:43 +0000 (0:00:00.281) 0:00:14.292 ************ 2025-05-25 01:01:33.687270 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.687281 | orchestrator | 2025-05-25 01:01:33.687291 | orchestrator | TASK [ceph-facts : generate 
cluster fsid] ************************************** 2025-05-25 01:01:33.687303 | orchestrator | Sunday 25 May 2025 00:59:44 +0000 (0:00:00.118) 0:00:14.411 ************ 2025-05-25 01:01:33.687313 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687324 | orchestrator | 2025-05-25 01:01:33.687335 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-25 01:01:33.687346 | orchestrator | Sunday 25 May 2025 00:59:44 +0000 (0:00:00.220) 0:00:14.631 ************ 2025-05-25 01:01:33.687357 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687368 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687379 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687390 | orchestrator | 2025-05-25 01:01:33.687400 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-25 01:01:33.687411 | orchestrator | Sunday 25 May 2025 00:59:44 +0000 (0:00:00.453) 0:00:15.085 ************ 2025-05-25 01:01:33.687422 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687433 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687444 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687455 | orchestrator | 2025-05-25 01:01:33.687465 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-25 01:01:33.687476 | orchestrator | Sunday 25 May 2025 00:59:45 +0000 (0:00:00.321) 0:00:15.407 ************ 2025-05-25 01:01:33.687487 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687498 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687509 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687520 | orchestrator | 2025-05-25 01:01:33.687531 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-25 01:01:33.687542 | orchestrator | Sunday 25 May 2025 00:59:45 +0000 (0:00:00.331) 
0:00:15.738 ************ 2025-05-25 01:01:33.687553 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687564 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687589 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687600 | orchestrator | 2025-05-25 01:01:33.687611 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-25 01:01:33.687622 | orchestrator | Sunday 25 May 2025 00:59:45 +0000 (0:00:00.300) 0:00:16.039 ************ 2025-05-25 01:01:33.687633 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687649 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687660 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687671 | orchestrator | 2025-05-25 01:01:33.687682 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-25 01:01:33.687693 | orchestrator | Sunday 25 May 2025 00:59:46 +0000 (0:00:00.452) 0:00:16.491 ************ 2025-05-25 01:01:33.687704 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687715 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687725 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687736 | orchestrator | 2025-05-25 01:01:33.687747 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-25 01:01:33.687758 | orchestrator | Sunday 25 May 2025 00:59:46 +0000 (0:00:00.333) 0:00:16.825 ************ 2025-05-25 01:01:33.687769 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.687779 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.687790 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.687801 | orchestrator | 2025-05-25 01:01:33.687812 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-25 01:01:33.687823 | orchestrator | Sunday 25 May 2025 00:59:46 +0000 
(0:00:00.307) 0:00:17.132 ************ 2025-05-25 01:01:33.687835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91dc6ac0--e554--5716--a575--6858f2de7d62-osd--block--91dc6ac0--e554--5716--a575--6858f2de7d62', 'dm-uuid-LVM-d2a8l8sOV5VaZWIt9G7ovWvisC1s7hAtkaOlpTjYrMuvpi5viCvgA2HhfZ5QURWB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d-osd--block--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d', 'dm-uuid-LVM-bgBKeIFkqQz7hEmPHFLD4eddfhEdiUhgcC2wMt1sJ6yrzyd9TmpOW67kWK8imV82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86509461--9ff7--5f8d--a545--2dedda0a1471-osd--block--86509461--9ff7--5f8d--a545--2dedda0a1471', 'dm-uuid-LVM-hwcAG3bjg1BWKJHBga7T8xw0rHFgX4cD6LJQfnPgjPrIDFi2RgRkiw5AKUlsTRZt'], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f6e0dcd--8614--5501--94b8--6b816e10f3a3-osd--block--1f6e0dcd--8614--5501--94b8--6b816e10f3a3', 'dm-uuid-LVM-C8PVX8qDQcM983z2QfAqCXD6yhsbuEq55ZrIlDvU6m19z1XleSOVq3exFBZsP3Nb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.687995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb7c7597-082a-4802-b2b2-08165cf24c9b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688100 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--91dc6ac0--e554--5716--a575--6858f2de7d62-osd--block--91dc6ac0--e554--5716--a575--6858f2de7d62'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xHqAVA-Cejf-7gr9-zFPT-wav0-5qxU-8HJN3e', 'scsi-0QEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6', 'scsi-SQEMU_QEMU_HARDDISK_b4cdb2bf-93fc-4f18-bc4f-5ab68c384bd6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d-osd--block--a344b0dc--179a--5809--8fe1--9e4cbc2dd42d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HflusL-wpGE-znvH-o1Mb-iAxF-aSGd-BxXSJA', 'scsi-0QEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8', 'scsi-SQEMU_QEMU_HARDDISK_5d6b2858-a2bf-4730-a36e-7c509d6038b8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d', 'scsi-SQEMU_QEMU_HARDDISK_f90c35ea-44f5-4677-8ded-e7e6ddf8d55d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-01-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688224 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.688235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f34e313d--bca1--5ff8--8346--de91d98588f2-osd--block--f34e313d--bca1--5ff8--8346--de91d98588f2', 'dm-uuid-LVM-qzCBpHI6u1zR1tGPZw4KwHds2G6YtCfIbBzXT9BeEmg8kAhbPEy11F8gyaE9dmNs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part1', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part14', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part15', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part16', 'scsi-SQEMU_QEMU_HARDDISK_837412a5-fe4a-44e8-b41a-275c23b45357-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688303 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a31c7786--f287--566f--81cf--65786b8dbda6-osd--block--a31c7786--f287--566f--81cf--65786b8dbda6', 'dm-uuid-LVM-jeE7HgaYYNiTQ2Cdr5ptN5Bi6GUMjUK5bGPqGwiAscPmEeOlmdmjysTWSdrPwyUC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--86509461--9ff7--5f8d--a545--2dedda0a1471-osd--block--86509461--9ff7--5f8d--a545--2dedda0a1471'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oHsvLZ-eiHr-bhxG-uPy5-zdll-keK3-s9azWZ', 'scsi-0QEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c', 'scsi-SQEMU_QEMU_HARDDISK_a7a2bb5e-544e-42c6-9dad-0ece7cbc632c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--1f6e0dcd--8614--5501--94b8--6b816e10f3a3-osd--block--1f6e0dcd--8614--5501--94b8--6b816e10f3a3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lsgh6i-v8WU-otvP-1ReA-wzwN-nPO4-etH5Z4', 'scsi-0QEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d', 'scsi-SQEMU_QEMU_HARDDISK_45989edd-037d-47c1-af48-ae55f96e814d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688362 | orchestrator | skipping: [testbed-node-5] => 2025-05-25 01:01:33 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED 2025-05-25 01:01:33.688380 | orchestrator | (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9', 'scsi-SQEMU_QEMU_HARDDISK_00903628-efdf-425a-bac1-d89af04936e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688428 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.688439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:01:33.688516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e42b604-2874-4965-a971-13f8550546b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f34e313d--bca1--5ff8--8346--de91d98588f2-osd--block--f34e313d--bca1--5ff8--8346--de91d98588f2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CITy27-5Akz-gmxl-ss4O-c7b5-eSzN-ksQvPt', 'scsi-0QEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9', 'scsi-SQEMU_QEMU_HARDDISK_5104b556-d7c3-42e9-9230-39ae2abd74e9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a31c7786--f287--566f--81cf--65786b8dbda6-osd--block--a31c7786--f287--566f--81cf--65786b8dbda6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1bhDeb-HRF6-15Pb-u6KA-690z-4Xkb-oXjYuE', 'scsi-0QEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5', 'scsi-SQEMU_QEMU_HARDDISK_a4234bd8-7c33-4d3a-bb78-5919196abab5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825', 'scsi-SQEMU_QEMU_HARDDISK_70c7a39a-01cf-4431-b65e-7bc8a8e29825'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:01:33.688595 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.688606 | orchestrator | 2025-05-25 01:01:33.688617 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-25 01:01:33.688628 | orchestrator | Sunday 25 May 2025 00:59:47 +0000 (0:00:00.545) 0:00:17.678 ************ 2025-05-25 01:01:33.688639 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-25 01:01:33.688650 | orchestrator | 2025-05-25 01:01:33.688661 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-25 01:01:33.688672 | orchestrator | Sunday 25 May 2025 00:59:48 +0000 (0:00:01.214) 0:00:18.893 ************ 2025-05-25 01:01:33.688683 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.688694 | orchestrator | 2025-05-25 01:01:33.688705 | orchestrator | TASK [ceph-facts 
: set_fact rgw_hostname] ************************************** 2025-05-25 01:01:33.688716 | orchestrator | Sunday 25 May 2025 00:59:48 +0000 (0:00:00.293) 0:00:19.187 ************ 2025-05-25 01:01:33.688726 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.688737 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.688755 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.688765 | orchestrator | 2025-05-25 01:01:33.688776 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-25 01:01:33.688787 | orchestrator | Sunday 25 May 2025 00:59:49 +0000 (0:00:00.371) 0:00:19.559 ************ 2025-05-25 01:01:33.688798 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.688808 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.688819 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.688830 | orchestrator | 2025-05-25 01:01:33.688841 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-25 01:01:33.688852 | orchestrator | Sunday 25 May 2025 00:59:49 +0000 (0:00:00.646) 0:00:20.205 ************ 2025-05-25 01:01:33.688863 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.688873 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.688884 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.688895 | orchestrator | 2025-05-25 01:01:33.688906 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-25 01:01:33.688916 | orchestrator | Sunday 25 May 2025 00:59:50 +0000 (0:00:00.322) 0:00:20.528 ************ 2025-05-25 01:01:33.688927 | orchestrator | ok: [testbed-node-3] 2025-05-25 01:01:33.688938 | orchestrator | ok: [testbed-node-4] 2025-05-25 01:01:33.688948 | orchestrator | ok: [testbed-node-5] 2025-05-25 01:01:33.688959 | orchestrator | 2025-05-25 01:01:33.688970 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-25 
01:01:33.688981 | orchestrator | Sunday 25 May 2025 00:59:51 +0000 (0:00:00.838) 0:00:21.366 ************ 2025-05-25 01:01:33.688992 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.689002 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.689013 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.689024 | orchestrator | 2025-05-25 01:01:33.689051 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-25 01:01:33.689062 | orchestrator | Sunday 25 May 2025 00:59:51 +0000 (0:00:00.311) 0:00:21.678 ************ 2025-05-25 01:01:33.689073 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.689084 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.689095 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.689105 | orchestrator | 2025-05-25 01:01:33.689116 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-25 01:01:33.689127 | orchestrator | Sunday 25 May 2025 00:59:51 +0000 (0:00:00.425) 0:00:22.104 ************ 2025-05-25 01:01:33.689138 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.689148 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.689159 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.689170 | orchestrator | 2025-05-25 01:01:33.689180 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-25 01:01:33.689191 | orchestrator | Sunday 25 May 2025 00:59:52 +0000 (0:00:00.295) 0:00:22.399 ************ 2025-05-25 01:01:33.689202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-25 01:01:33.689213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-25 01:01:33.689223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-25 01:01:33.689234 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  
2025-05-25 01:01:33.689245 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-25 01:01:33.689255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-25 01:01:33.689266 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.689277 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-25 01:01:33.689288 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-25 01:01:33.689298 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.689310 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-25 01:01:33.689326 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:01:33.689337 | orchestrator | 2025-05-25 01:01:33.689348 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-25 01:01:33.689366 | orchestrator | Sunday 25 May 2025 00:59:52 +0000 (0:00:00.854) 0:00:23.253 ************ 2025-05-25 01:01:33.689381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-25 01:01:33.689393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-25 01:01:33.689403 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-25 01:01:33.689414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-25 01:01:33.689425 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:01:33.689435 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-25 01:01:33.689446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-25 01:01:33.689457 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-25 01:01:33.689467 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:01:33.689478 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-25 01:01:33.689489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  
2025-05-25 01:01:33.689499 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.689510 | orchestrator |
2025-05-25 01:01:33.689521 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-25 01:01:33.689532 | orchestrator | Sunday 25 May 2025 00:59:53 +0000 (0:00:00.644) 0:00:23.897 ************
2025-05-25 01:01:33.689543 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 01:01:33.689554 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 01:01:33.689564 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 01:01:33.689575 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 01:01:33.689586 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 01:01:33.689596 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 01:01:33.689607 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 01:01:33.689618 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 01:01:33.689629 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 01:01:33.689639 | orchestrator |
2025-05-25 01:01:33.689650 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-25 01:01:33.689661 | orchestrator | Sunday 25 May 2025 00:59:54 +0000 (0:00:01.370) 0:00:25.268 ************
2025-05-25 01:01:33.689672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 01:01:33.689682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 01:01:33.689693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 01:01:33.689704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 01:01:33.689714 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 01:01:33.689725 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 01:01:33.689736 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.689746 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.689757 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 01:01:33.689790 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 01:01:33.689801 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 01:01:33.689812 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.689823 | orchestrator |
2025-05-25 01:01:33.689834 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-25 01:01:33.689845 | orchestrator | Sunday 25 May 2025 00:59:55 +0000 (0:00:00.575) 0:00:25.843 ************
2025-05-25 01:01:33.689855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-25 01:01:33.689866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-25 01:01:33.689877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-25 01:01:33.689888 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-25 01:01:33.689905 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.689916 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-25 01:01:33.689927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-25 01:01:33.689938 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.689948 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-25 01:01:33.689959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-25 01:01:33.689970 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-25 01:01:33.689981 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.689991 | orchestrator |
2025-05-25 01:01:33.690002 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-25 01:01:33.690013 | orchestrator | Sunday 25 May 2025 00:59:55 +0000 (0:00:00.387) 0:00:26.231 ************
2025-05-25 01:01:33.690074 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-25 01:01:33.690086 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-25 01:01:33.690097 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-25 01:01:33.690108 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-25 01:01:33.690119 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-25 01:01:33.690129 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-25 01:01:33.690140 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690158 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.690169 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-25 01:01:33.690180 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-25 01:01:33.690196 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-25 01:01:33.690207 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.690218 | orchestrator |
2025-05-25 01:01:33.690229 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-25 01:01:33.690240 | orchestrator | Sunday 25 May 2025 00:59:56 +0000 (0:00:00.381) 0:00:26.613 ************
2025-05-25 01:01:33.690251 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 01:01:33.690262 | orchestrator |
2025-05-25 01:01:33.690274 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-25 01:01:33.690285 | orchestrator | Sunday 25 May 2025 00:59:57 +0000 (0:00:00.700) 0:00:27.314 ************
2025-05-25 01:01:33.690296 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690306 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.690317 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.690328 | orchestrator |
2025-05-25 01:01:33.690339 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-25 01:01:33.690350 | orchestrator | Sunday 25 May 2025 00:59:57 +0000 (0:00:00.320) 0:00:27.634 ************
2025-05-25 01:01:33.690360 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690371 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.690382 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.690392 | orchestrator |
2025-05-25 01:01:33.690403 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-25 01:01:33.690414 | orchestrator | Sunday 25 May 2025 00:59:57 +0000 (0:00:00.303) 0:00:27.937 ************
2025-05-25 01:01:33.690425 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690436 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.690453 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.690464 | orchestrator |
2025-05-25 01:01:33.690475 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-25 01:01:33.690486 | orchestrator | Sunday 25 May 2025 00:59:57 +0000 (0:00:00.293) 0:00:28.230 ************
2025-05-25 01:01:33.690496 | orchestrator | ok: [testbed-node-3]
2025-05-25 01:01:33.690507 | orchestrator | ok: [testbed-node-4]
2025-05-25 01:01:33.690518 | orchestrator | ok: [testbed-node-5]
2025-05-25 01:01:33.690529 | orchestrator |
2025-05-25 01:01:33.690539 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-25 01:01:33.690550 | orchestrator | Sunday 25 May 2025 00:59:58 +0000 (0:00:00.579) 0:00:28.810 ************
2025-05-25 01:01:33.690561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 01:01:33.690571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 01:01:33.690582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 01:01:33.690593 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690604 | orchestrator |
2025-05-25 01:01:33.690614 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-25 01:01:33.690625 | orchestrator | Sunday 25 May 2025 00:59:58 +0000 (0:00:00.391) 0:00:29.201 ************
2025-05-25 01:01:33.690636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 01:01:33.690647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 01:01:33.690657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 01:01:33.690668 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690679 | orchestrator |
2025-05-25 01:01:33.690690 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-25 01:01:33.690701 | orchestrator | Sunday 25 May 2025 00:59:59 +0000 (0:00:00.367) 0:00:29.568 ************
2025-05-25 01:01:33.690711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 01:01:33.690722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 01:01:33.690733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 01:01:33.690743 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690754 | orchestrator |
2025-05-25 01:01:33.690765 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 01:01:33.690775 | orchestrator | Sunday 25 May 2025 00:59:59 +0000 (0:00:00.364) 0:00:29.933 ************
2025-05-25 01:01:33.690786 | orchestrator | ok: [testbed-node-3]
2025-05-25 01:01:33.690797 | orchestrator | ok: [testbed-node-4]
2025-05-25 01:01:33.690808 | orchestrator | ok: [testbed-node-5]
2025-05-25 01:01:33.690819 | orchestrator |
2025-05-25 01:01:33.690829 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-25 01:01:33.690840 | orchestrator | Sunday 25 May 2025 00:59:59 +0000 (0:00:00.304) 0:00:30.238 ************
2025-05-25 01:01:33.690851 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-25 01:01:33.690862 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-25 01:01:33.690872 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-25 01:01:33.690883 | orchestrator |
2025-05-25 01:01:33.690894 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-25 01:01:33.690905 | orchestrator | Sunday 25 May 2025 01:00:00 +0000 (0:00:00.504) 0:00:30.742 ************
2025-05-25 01:01:33.690916 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690927 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.690937 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.690948 | orchestrator |
2025-05-25 01:01:33.690959 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-25 01:01:33.690969 | orchestrator | Sunday 25 May 2025 01:00:00 +0000 (0:00:00.467) 0:00:31.210 ************
2025-05-25 01:01:33.690980 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.690991 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.691002 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.691023 | orchestrator |
2025-05-25 01:01:33.691056 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-25 01:01:33.691068 | orchestrator | Sunday 25 May 2025 01:00:01 +0000 (0:00:00.325) 0:00:31.536 ************
2025-05-25 01:01:33.691079 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-25 01:01:33.691089 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.691105 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-25 01:01:33.691116 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.691126 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-25 01:01:33.691137 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.691148 | orchestrator |
2025-05-25 01:01:33.691159 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-25 01:01:33.691170 | orchestrator | Sunday 25 May 2025 01:00:01 +0000 (0:00:00.408) 0:00:31.945 ************
2025-05-25 01:01:33.691181 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-25 01:01:33.691192 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.691202 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-25 01:01:33.691213 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.691224 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-25 01:01:33.691235 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.691246 | orchestrator |
2025-05-25 01:01:33.691257 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-25 01:01:33.691268 | orchestrator | Sunday 25 May 2025 01:00:01 +0000 (0:00:00.291) 0:00:32.236 ************
2025-05-25 01:01:33.691278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 01:01:33.691289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-25 01:01:33.691300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-25 01:01:33.691310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-25 01:01:33.691321 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.691332 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-25 01:01:33.691342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-25 01:01:33.691353 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-25 01:01:33.691363 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.691374 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-25 01:01:33.691385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-25 01:01:33.691396 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.691407 | orchestrator |
2025-05-25 01:01:33.691418 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-25 01:01:33.691428 | orchestrator | Sunday 25 May 2025 01:00:02 +0000 (0:00:00.812) 0:00:33.049 ************
2025-05-25 01:01:33.691439 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.691450 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.691460 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:01:33.691471 | orchestrator |
2025-05-25 01:01:33.691482 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-25 01:01:33.691493 | orchestrator | Sunday 25 May 2025 01:00:03 +0000 (0:00:00.283) 0:00:33.332 ************
2025-05-25 01:01:33.691503 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-25 01:01:33.691514 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 01:01:33.691525 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 01:01:33.691536 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 01:01:33.691555 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-25 01:01:33.691566 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-25 01:01:33.691576 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-25 01:01:33.691587 | orchestrator |
2025-05-25 01:01:33.691598 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-25 01:01:33.691609 | orchestrator | Sunday 25 May 2025 01:00:04 +0000 (0:00:01.001) 0:00:34.334 ************
2025-05-25 01:01:33.691619 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-25 01:01:33.691630 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 01:01:33.691641 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 01:01:33.691652 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-25 01:01:33.691662 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-25 01:01:33.691673 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-25 01:01:33.691684 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-25 01:01:33.691695 | orchestrator |
2025-05-25 01:01:33.691705 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-05-25 01:01:33.691716 | orchestrator | Sunday 25 May 2025 01:00:05 +0000 (0:00:01.693) 0:00:36.027 ************
2025-05-25 01:01:33.691727 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:01:33.691738 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:01:33.691748 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-05-25 01:01:33.691759 | orchestrator |
2025-05-25 01:01:33.691776 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-05-25 01:01:33.691787 | orchestrator | Sunday 25 May 2025 01:00:06 +0000 (0:00:00.694) 0:00:36.721 ************
2025-05-25 01:01:33.691805 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-25 01:01:33.691818 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-25 01:01:33.691830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-25 01:01:33.691841 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-25 01:01:33.691852 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-25 01:01:33.691863 | orchestrator |
2025-05-25 01:01:33.691874 | orchestrator | TASK [generate keys] ***********************************************************
2025-05-25 01:01:33.691885 | orchestrator | Sunday 25 May 2025 01:00:43 +0000 (0:00:37.323) 0:01:14.045 ************
2025-05-25 01:01:33.691896 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.691914 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.691924 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.691935 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.691946 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.691957 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.691967 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-05-25 01:01:33.691978 | orchestrator |
2025-05-25 01:01:33.691989 | orchestrator | TASK [get keys from monitors] **************************************************
2025-05-25 01:01:33.692000 | orchestrator | Sunday 25 May 2025 01:01:03 +0000 (0:00:19.760) 0:01:33.805 ************
2025-05-25 01:01:33.692011 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692022 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692081 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692095 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692106 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692117 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692128 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-25 01:01:33.692138 | orchestrator |
2025-05-25 01:01:33.692149 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-05-25 01:01:33.692160 | orchestrator | Sunday 25 May 2025 01:01:13 +0000 (0:00:09.615) 0:01:43.421 ************
2025-05-25 01:01:33.692171 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692182 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 01:01:33.692193 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 01:01:33.692204 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692214 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 01:01:33.692225 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 01:01:33.692236 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692247 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 01:01:33.692258 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 01:01:33.692276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692287 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 01:01:33.692298 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 01:01:33.692315 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692326 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 01:01:33.692337 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 01:01:33.692347 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-25 01:01:33.692356 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-25 01:01:33.692366 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-25 01:01:33.692375 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-25 01:01:33.692391 | orchestrator |
2025-05-25 01:01:33.692401 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 01:01:33.692411 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0  failed=0  skipped=37  rescued=0  ignored=0
2025-05-25 01:01:33.692422 | orchestrator | testbed-node-4 : ok=20  changed=0  unreachable=0  failed=0  skipped=30  rescued=0  ignored=0
2025-05-25 01:01:33.692432 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0  failed=0  skipped=29  rescued=0  ignored=0
2025-05-25 01:01:33.692441 | orchestrator |
2025-05-25 01:01:33.692451 | orchestrator |
2025-05-25 01:01:33.692460 | orchestrator |
2025-05-25 01:01:33.692470 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 01:01:33.692480 | orchestrator | Sunday 25 May 2025 01:01:31 +0000 (0:00:18.162) 0:02:01.583 ************
2025-05-25 01:01:33.692489 | orchestrator | ===============================================================================
2025-05-25 01:01:33.692499 | orchestrator | create openstack pool(s) ----------------------------------------------- 37.32s
2025-05-25 01:01:33.692509 | orchestrator | generate keys ---------------------------------------------------------- 19.76s
2025-05-25 01:01:33.692518 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.16s
2025-05-25 01:01:33.692528 | orchestrator | get keys from monitors -------------------------------------------------- 9.62s
2025-05-25 01:01:33.692537 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.25s
2025-05-25 01:01:33.692547 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.69s
2025-05-25 01:01:33.692556 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.37s
2025-05-25 01:01:33.692566 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.29s
2025-05-25 01:01:33.692575 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.21s
2025-05-25 01:01:33.692585 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.00s
2025-05-25 01:01:33.692594 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.85s
2025-05-25 01:01:33.692604 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.85s
2025-05-25 01:01:33.692614 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.84s
2025-05-25 01:01:33.692623 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.81s
2025-05-25 01:01:33.692633 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.81s
2025-05-25 01:01:33.692643 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.70s
2025-05-25 01:01:33.692653 | orchestrator | Include tasks from the ceph-osd role ------------------------------------ 0.69s
2025-05-25 01:01:33.692662 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.68s
2025-05-25 01:01:33.692672 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.66s
2025-05-25 01:01:33.692681 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.65s
2025-05-25 01:01:33.692691 | orchestrator | 2025-05-25 01:01:33 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:33.692784 | orchestrator | 2025-05-25 01:01:33 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:36.741140 | orchestrator | 2025-05-25 01:01:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:36.742940 | orchestrator | 2025-05-25 01:01:36 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:36.743828 | orchestrator | 2025-05-25 01:01:36 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:36.745658 | orchestrator | 2025-05-25 01:01:36 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:36.745708 | orchestrator | 2025-05-25 01:01:36 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:39.790760 | orchestrator | 2025-05-25 01:01:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:39.792723 | orchestrator | 2025-05-25 01:01:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:39.795446 | orchestrator | 2025-05-25 01:01:39 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:39.796944 | orchestrator | 2025-05-25 01:01:39 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:39.797544 | orchestrator | 2025-05-25 01:01:39 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:42.853607 | orchestrator | 2025-05-25 01:01:42 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:01:42.855401 | orchestrator | 2025-05-25 01:01:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:42.857700 | orchestrator | 2025-05-25 01:01:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:42.859960 | orchestrator | 2025-05-25 01:01:42 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:42.861644 | orchestrator | 2025-05-25 01:01:42 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:42.861883 | orchestrator | 2025-05-25 01:01:42 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:45.917165 | orchestrator | 2025-05-25 01:01:45 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:01:45.917852 | orchestrator | 2025-05-25 01:01:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:45.918968 | orchestrator | 2025-05-25 01:01:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:45.920184 | orchestrator | 2025-05-25 01:01:45 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:45.921425 | orchestrator | 2025-05-25 01:01:45 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:45.921452 | orchestrator | 2025-05-25 01:01:45 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:48.971618 | orchestrator | 2025-05-25 01:01:48 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:01:48.973437 | orchestrator | 2025-05-25 01:01:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:48.975457 | orchestrator | 2025-05-25 01:01:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:48.978721 | orchestrator | 2025-05-25 01:01:48 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:48.981117 | orchestrator | 2025-05-25 01:01:48 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:48.981631 | orchestrator | 2025-05-25 01:01:48 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:52.032955 | orchestrator | 2025-05-25 01:01:52 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:01:52.033608 | orchestrator | 2025-05-25 01:01:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:52.034560 | orchestrator | 2025-05-25 01:01:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:52.035900 | orchestrator | 2025-05-25 01:01:52 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:52.036590 | orchestrator | 2025-05-25 01:01:52 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:52.036627 | orchestrator | 2025-05-25 01:01:52 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:55.088439 | orchestrator | 2025-05-25 01:01:55 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:01:55.090150 | orchestrator | 2025-05-25 01:01:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:55.091587 | orchestrator | 2025-05-25 01:01:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:55.094289 | orchestrator | 2025-05-25 01:01:55 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:55.095281 | orchestrator | 2025-05-25 01:01:55 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:55.095310 | orchestrator | 2025-05-25 01:01:55 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:01:58.146419 | orchestrator | 2025-05-25 01:01:58 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:01:58.148271 | orchestrator | 2025-05-25 01:01:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:01:58.150013 | orchestrator | 2025-05-25 01:01:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:01:58.152214 | orchestrator | 2025-05-25 01:01:58 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:01:58.153747 | orchestrator | 2025-05-25 01:01:58 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:01:58.153768 | orchestrator | 2025-05-25 01:01:58 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:02:01.207831 | orchestrator | 2025-05-25 01:02:01 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:02:01.207939 | orchestrator | 2025-05-25 01:02:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:02:01.210916 | orchestrator | 2025-05-25 01:02:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:02:01.213305 | orchestrator | 2025-05-25 01:02:01 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:02:01.215490 | orchestrator | 2025-05-25 01:02:01 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:02:01.215900 | orchestrator | 2025-05-25 01:02:01 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:02:04.270818 | orchestrator | 2025-05-25 01:02:04 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:02:04.272908 | orchestrator | 2025-05-25 01:02:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:02:04.274642 | orchestrator | 2025-05-25 01:02:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:02:04.276805 | orchestrator | 2025-05-25 01:02:04 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:02:04.278135 | orchestrator | 2025-05-25 01:02:04 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:02:04.278174 | orchestrator | 2025-05-25 01:02:04 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:02:07.326982 | orchestrator | 2025-05-25 01:02:07 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state STARTED
2025-05-25 01:02:07.328748 | orchestrator | 2025-05-25 01:02:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:02:07.329729 | orchestrator | 2025-05-25 01:02:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:02:07.331204 | orchestrator | 2025-05-25 01:02:07 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:02:07.332416 | orchestrator | 2025-05-25 01:02:07 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:02:07.332439 | orchestrator | 2025-05-25 01:02:07 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:02:10.377242 | orchestrator | 2025-05-25 01:02:10 | INFO  | Task 9b8c5630-8cfe-4bc5-9eca-f88509fdd295 is in state SUCCESS
2025-05-25 01:02:10.377332 | orchestrator |
2025-05-25 01:02:10.378677 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-25 01:02:10.378708 | orchestrator |
2025-05-25 01:02:10.378717 | orchestrator | PLAY [Apply role fetch-keys] ***************************************************
2025-05-25 01:02:10.378726 | orchestrator |
2025-05-25 01:02:10.378733 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-25 01:02:10.378740 | orchestrator | Sunday 25 May 2025 01:01:43 +0000 (0:00:00.440) 0:00:00.440 ************
2025-05-25 01:02:10.378748 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0
2025-05-25 01:02:10.378756 | orchestrator |
2025-05-25 01:02:10.378763 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-25 01:02:10.378770 | orchestrator | Sunday 25 May 2025 01:01:43 +0000 (0:00:00.194) 0:00:00.635 ************
2025-05-25 01:02:10.378778 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 01:02:10.378785 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-05-25 01:02:10.378791 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-05-25 01:02:10.378798 | orchestrator |
2025-05-25 01:02:10.378805 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-25 01:02:10.378812 | orchestrator | Sunday 25 May 2025 01:01:44 +0000 (0:00:00.803) 0:00:01.438 ************
2025-05-25 01:02:10.378819 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2025-05-25 01:02:10.378825 | orchestrator |
2025-05-25 01:02:10.378832 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-25 01:02:10.378839 | orchestrator | Sunday 25 May 2025 01:01:44 +0000 (0:00:00.215) 0:00:01.654 ************
2025-05-25 01:02:10.378845 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:02:10.378852 | orchestrator |
2025-05-25 01:02:10.378859 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-25 01:02:10.378866 | orchestrator | Sunday 25 May 2025 01:01:45 +0000 (0:00:00.597) 0:00:02.251 ************
2025-05-25 01:02:10.378873 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:02:10.378880 | orchestrator |
2025-05-25 01:02:10.378886 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-25 01:02:10.378893 | orchestrator | Sunday 25 May 2025 01:01:45 +0000 (0:00:00.453) 0:00:02.371 ************
2025-05-25 01:02:10.378914 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:02:10.378921 | orchestrator |
2025-05-25 01:02:10.378928 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-25 01:02:10.378934 | orchestrator | Sunday 25 May 2025 01:01:45 +0000 (0:00:00.453) 0:00:02.824 ************
2025-05-25 01:02:10.378941 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:02:10.378947 | orchestrator |
2025-05-25 01:02:10.378953 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-25 01:02:10.378960 | orchestrator | Sunday 25 May 2025 01:01:45 +0000 (0:00:00.136) 0:00:02.961 ************
2025-05-25 01:02:10.378966 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:02:10.378973 | orchestrator |
2025-05-25 01:02:10.378980 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-25 01:02:10.378987 | orchestrator | Sunday 25 May 2025 01:01:46 +0000 (0:00:00.127) 0:00:03.089 ************
2025-05-25 01:02:10.379013 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:02:10.379021 | orchestrator |
2025-05-25 01:02:10.379028 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-25 01:02:10.379035 | orchestrator | Sunday 25 May 2025 01:01:46 +0000 (0:00:00.142) 0:00:03.231 ************
2025-05-25 01:02:10.379042 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:02:10.379050 | orchestrator |
2025-05-25 01:02:10.379057 | orchestrator | TASK [ceph-facts : set_fact ceph_release
ceph_stable_release] ****************** 2025-05-25 01:02:10.379063 | orchestrator | Sunday 25 May 2025 01:01:46 +0000 (0:00:00.126) 0:00:03.358 ************ 2025-05-25 01:02:10.379098 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.379105 | orchestrator | 2025-05-25 01:02:10.379111 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-25 01:02:10.379118 | orchestrator | Sunday 25 May 2025 01:01:46 +0000 (0:00:00.131) 0:00:03.490 ************ 2025-05-25 01:02:10.379125 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 01:02:10.379132 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 01:02:10.379138 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 01:02:10.379145 | orchestrator | 2025-05-25 01:02:10.379151 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-25 01:02:10.379158 | orchestrator | Sunday 25 May 2025 01:01:47 +0000 (0:00:00.830) 0:00:04.321 ************ 2025-05-25 01:02:10.379166 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.379173 | orchestrator | 2025-05-25 01:02:10.379179 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-25 01:02:10.379186 | orchestrator | Sunday 25 May 2025 01:01:47 +0000 (0:00:00.236) 0:00:04.558 ************ 2025-05-25 01:02:10.379193 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 01:02:10.379199 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-25 01:02:10.379205 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-25 01:02:10.379211 | orchestrator | 2025-05-25 01:02:10.379218 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] 
******************************** 2025-05-25 01:02:10.379225 | orchestrator | Sunday 25 May 2025 01:01:49 +0000 (0:00:01.882) 0:00:06.440 ************ 2025-05-25 01:02:10.379231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 01:02:10.379238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 01:02:10.379245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 01:02:10.379251 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379258 | orchestrator | 2025-05-25 01:02:10.379265 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-25 01:02:10.379281 | orchestrator | Sunday 25 May 2025 01:01:49 +0000 (0:00:00.448) 0:00:06.888 ************ 2025-05-25 01:02:10.379292 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-25 01:02:10.379303 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-25 01:02:10.379310 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-25 01:02:10.379318 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379325 | orchestrator | 2025-05-25 01:02:10.379332 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-25 01:02:10.379340 | orchestrator | Sunday 25 May 2025 01:01:50 +0000 (0:00:00.794) 0:00:07.682 ************ 2025-05-25 
01:02:10.379360 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 01:02:10.379374 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 01:02:10.379382 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-25 01:02:10.379391 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379398 | orchestrator | 2025-05-25 01:02:10.379407 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-25 01:02:10.379416 | orchestrator | Sunday 25 May 2025 01:01:50 +0000 (0:00:00.167) 0:00:07.850 ************ 2025-05-25 01:02:10.379425 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '93d61f55e986', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-25 01:01:48.175110', 'end': '2025-05-25 
01:01:48.212178', 'delta': '0:00:00.037068', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['93d61f55e986'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-25 01:02:10.379436 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '567d1c362456', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-25 01:01:48.715530', 'end': '2025-05-25 01:01:48.757800', 'delta': '0:00:00.042270', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['567d1c362456'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-25 01:02:10.379450 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a7349bafd4ea', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-25 01:01:49.257457', 'end': '2025-05-25 01:01:49.301473', 'delta': '0:00:00.044016', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a7349bafd4ea'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 
'item'}) 2025-05-25 01:02:10.379459 | orchestrator | 2025-05-25 01:02:10.379467 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-25 01:02:10.379480 | orchestrator | Sunday 25 May 2025 01:01:51 +0000 (0:00:00.206) 0:00:08.057 ************ 2025-05-25 01:02:10.379601 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.379614 | orchestrator | 2025-05-25 01:02:10.379621 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-25 01:02:10.379628 | orchestrator | Sunday 25 May 2025 01:01:51 +0000 (0:00:00.252) 0:00:08.309 ************ 2025-05-25 01:02:10.379635 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-25 01:02:10.379642 | orchestrator | 2025-05-25 01:02:10.379649 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-25 01:02:10.379656 | orchestrator | Sunday 25 May 2025 01:01:52 +0000 (0:00:01.598) 0:00:09.908 ************ 2025-05-25 01:02:10.379664 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379672 | orchestrator | 2025-05-25 01:02:10.379678 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-25 01:02:10.379690 | orchestrator | Sunday 25 May 2025 01:01:53 +0000 (0:00:00.131) 0:00:10.039 ************ 2025-05-25 01:02:10.379696 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379703 | orchestrator | 2025-05-25 01:02:10.379710 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-25 01:02:10.379717 | orchestrator | Sunday 25 May 2025 01:01:53 +0000 (0:00:00.210) 0:00:10.250 ************ 2025-05-25 01:02:10.379724 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379730 | orchestrator | 2025-05-25 01:02:10.379737 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-25 
01:02:10.379744 | orchestrator | Sunday 25 May 2025 01:01:53 +0000 (0:00:00.139) 0:00:10.389 ************ 2025-05-25 01:02:10.379750 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.379756 | orchestrator | 2025-05-25 01:02:10.379763 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-25 01:02:10.379770 | orchestrator | Sunday 25 May 2025 01:01:53 +0000 (0:00:00.124) 0:00:10.513 ************ 2025-05-25 01:02:10.379777 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379783 | orchestrator | 2025-05-25 01:02:10.379790 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-25 01:02:10.379797 | orchestrator | Sunday 25 May 2025 01:01:53 +0000 (0:00:00.199) 0:00:10.713 ************ 2025-05-25 01:02:10.379803 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379810 | orchestrator | 2025-05-25 01:02:10.379817 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-25 01:02:10.379824 | orchestrator | Sunday 25 May 2025 01:01:53 +0000 (0:00:00.125) 0:00:10.838 ************ 2025-05-25 01:02:10.379831 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379837 | orchestrator | 2025-05-25 01:02:10.379844 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-25 01:02:10.379850 | orchestrator | Sunday 25 May 2025 01:01:53 +0000 (0:00:00.136) 0:00:10.974 ************ 2025-05-25 01:02:10.379856 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379863 | orchestrator | 2025-05-25 01:02:10.379870 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-25 01:02:10.379877 | orchestrator | Sunday 25 May 2025 01:01:54 +0000 (0:00:00.130) 0:00:11.105 ************ 2025-05-25 01:02:10.379884 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379890 | orchestrator | 
2025-05-25 01:02:10.379897 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-25 01:02:10.379904 | orchestrator | Sunday 25 May 2025 01:01:54 +0000 (0:00:00.157) 0:00:11.262 ************ 2025-05-25 01:02:10.379910 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379917 | orchestrator | 2025-05-25 01:02:10.379924 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-25 01:02:10.379931 | orchestrator | Sunday 25 May 2025 01:01:54 +0000 (0:00:00.137) 0:00:11.400 ************ 2025-05-25 01:02:10.379938 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379944 | orchestrator | 2025-05-25 01:02:10.379951 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-25 01:02:10.379964 | orchestrator | Sunday 25 May 2025 01:01:54 +0000 (0:00:00.316) 0:00:11.716 ************ 2025-05-25 01:02:10.379971 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.379978 | orchestrator | 2025-05-25 01:02:10.379985 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-25 01:02:10.379991 | orchestrator | Sunday 25 May 2025 01:01:54 +0000 (0:00:00.119) 0:00:11.836 ************ 2025-05-25 01:02:10.379999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-25 01:02:10.380186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part1', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part14', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part15', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part16', 'scsi-SQEMU_QEMU_HARDDISK_eeee712c-196d-42b2-b707-3a3109b31946-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:02:10.380197 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-25-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-25 01:02:10.380204 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380211 | orchestrator | 2025-05-25 01:02:10.380218 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-25 01:02:10.380227 | orchestrator | Sunday 25 May 2025 01:01:55 +0000 (0:00:00.280) 0:00:12.116 ************ 2025-05-25 01:02:10.380234 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380241 | orchestrator | 2025-05-25 01:02:10.380248 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-25 01:02:10.380255 | orchestrator | Sunday 25 May 2025 01:01:55 +0000 (0:00:00.240) 0:00:12.357 ************ 2025-05-25 01:02:10.380262 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380268 | orchestrator | 2025-05-25 01:02:10.380275 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-25 01:02:10.380282 | orchestrator | Sunday 25 May 2025 01:01:55 +0000 (0:00:00.120) 0:00:12.477 ************ 2025-05-25 01:02:10.380289 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380296 | orchestrator | 2025-05-25 01:02:10.380303 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-25 01:02:10.380310 | orchestrator | Sunday 25 May 2025 01:01:55 +0000 (0:00:00.120) 0:00:12.598 
************ 2025-05-25 01:02:10.380316 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.380328 | orchestrator | 2025-05-25 01:02:10.380334 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-25 01:02:10.380341 | orchestrator | Sunday 25 May 2025 01:01:56 +0000 (0:00:00.483) 0:00:13.082 ************ 2025-05-25 01:02:10.380348 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.380355 | orchestrator | 2025-05-25 01:02:10.380361 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-25 01:02:10.380368 | orchestrator | Sunday 25 May 2025 01:01:56 +0000 (0:00:00.123) 0:00:13.206 ************ 2025-05-25 01:02:10.380375 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.380381 | orchestrator | 2025-05-25 01:02:10.380388 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-25 01:02:10.380395 | orchestrator | Sunday 25 May 2025 01:01:56 +0000 (0:00:00.457) 0:00:13.664 ************ 2025-05-25 01:02:10.380402 | orchestrator | ok: [testbed-node-0] 2025-05-25 01:02:10.380409 | orchestrator | 2025-05-25 01:02:10.380415 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-25 01:02:10.380422 | orchestrator | Sunday 25 May 2025 01:01:56 +0000 (0:00:00.140) 0:00:13.804 ************ 2025-05-25 01:02:10.380428 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380435 | orchestrator | 2025-05-25 01:02:10.380442 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-25 01:02:10.380449 | orchestrator | Sunday 25 May 2025 01:01:57 +0000 (0:00:00.622) 0:00:14.427 ************ 2025-05-25 01:02:10.380456 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380463 | orchestrator | 2025-05-25 01:02:10.380469 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to 
monitor_address_block ipv4] *** 2025-05-25 01:02:10.380476 | orchestrator | Sunday 25 May 2025 01:01:57 +0000 (0:00:00.141) 0:00:14.568 ************ 2025-05-25 01:02:10.380482 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 01:02:10.380489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 01:02:10.380496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 01:02:10.380503 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380510 | orchestrator | 2025-05-25 01:02:10.380517 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-25 01:02:10.380523 | orchestrator | Sunday 25 May 2025 01:01:58 +0000 (0:00:00.439) 0:00:15.007 ************ 2025-05-25 01:02:10.380530 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 01:02:10.380536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 01:02:10.380543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 01:02:10.380551 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380557 | orchestrator | 2025-05-25 01:02:10.380568 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-25 01:02:10.380575 | orchestrator | Sunday 25 May 2025 01:01:58 +0000 (0:00:00.445) 0:00:15.453 ************ 2025-05-25 01:02:10.380582 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-25 01:02:10.380588 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-25 01:02:10.380595 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-25 01:02:10.380602 | orchestrator | 2025-05-25 01:02:10.380609 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-25 01:02:10.380616 | orchestrator | Sunday 25 May 2025 01:01:59 +0000 (0:00:01.076) 0:00:16.530 
************ 2025-05-25 01:02:10.380623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 01:02:10.380629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 01:02:10.380636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 01:02:10.380643 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380650 | orchestrator | 2025-05-25 01:02:10.380656 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-25 01:02:10.380663 | orchestrator | Sunday 25 May 2025 01:01:59 +0000 (0:00:00.191) 0:00:16.721 ************ 2025-05-25 01:02:10.380675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-25 01:02:10.380682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-25 01:02:10.380688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-25 01:02:10.380695 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:02:10.380702 | orchestrator | 2025-05-25 01:02:10.380709 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-25 01:02:10.380716 | orchestrator | Sunday 25 May 2025 01:01:59 +0000 (0:00:00.207) 0:00:16.929 ************ 2025-05-25 01:02:10.380722 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-25 01:02:10.380729 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-25 01:02:10.380736 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-25 01:02:10.380743 | orchestrator | 2025-05-25 01:02:10.380750 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-25 01:02:10.380760 | orchestrator | Sunday 25 May 2025 01:02:00 +0000 (0:00:00.175) 0:00:17.104 ************ 
2025-05-25 01:02:10.380767 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:02:10.380774 | orchestrator |
2025-05-25 01:02:10.380781 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-25 01:02:10.380787 | orchestrator | Sunday 25 May 2025 01:02:00 +0000 (0:00:00.121) 0:00:17.226 ************
2025-05-25 01:02:10.380794 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:02:10.380800 | orchestrator |
2025-05-25 01:02:10.380807 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-25 01:02:10.380814 | orchestrator | Sunday 25 May 2025 01:02:00 +0000 (0:00:00.129) 0:00:17.356 ************
2025-05-25 01:02:10.380820 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 01:02:10.380827 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 01:02:10.380833 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 01:02:10.380840 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-25 01:02:10.380847 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-25 01:02:10.380854 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-25 01:02:10.380861 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-25 01:02:10.380868 | orchestrator |
2025-05-25 01:02:10.380874 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-25 01:02:10.380881 | orchestrator | Sunday 25 May 2025 01:02:01 +0000 (0:00:01.112) 0:00:18.469 ************
2025-05-25 01:02:10.380887 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-25 01:02:10.380894 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-25 01:02:10.380901 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-25 01:02:10.380908 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-25 01:02:10.380915 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-25 01:02:10.380922 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-25 01:02:10.380928 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-25 01:02:10.380935 | orchestrator |
2025-05-25 01:02:10.380941 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ******************************
2025-05-25 01:02:10.380948 | orchestrator | Sunday 25 May 2025 01:02:02 +0000 (0:00:01.421) 0:00:19.891 ************
2025-05-25 01:02:10.380955 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:02:10.380962 | orchestrator |
2025-05-25 01:02:10.380969 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] ***
2025-05-25 01:02:10.380980 | orchestrator | Sunday 25 May 2025 01:02:03 +0000 (0:00:00.454) 0:00:20.345 ************
2025-05-25 01:02:10.380987 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 01:02:10.380994 | orchestrator |
2025-05-25 01:02:10.381001 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] ***
2025-05-25 01:02:10.381008 | orchestrator | Sunday 25 May 2025 01:02:03 +0000 (0:00:00.595) 0:00:20.941 ************
2025-05-25 01:02:10.381019 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring)
2025-05-25 01:02:10.381026 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring)
2025-05-25 01:02:10.381032 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring)
2025-05-25 01:02:10.381039 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring)
2025-05-25 01:02:10.381045 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring)
2025-05-25 01:02:10.381052 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring)
2025-05-25 01:02:10.381059 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring)
2025-05-25 01:02:10.381065 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring)
2025-05-25 01:02:10.381090 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring)
2025-05-25 01:02:10.381096 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring)
2025-05-25 01:02:10.381103 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring)
2025-05-25 01:02:10.381109 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring)
2025-05-25 01:02:10.381116 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)
2025-05-25 01:02:10.381123 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)
2025-05-25 01:02:10.381130 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)
2025-05-25 01:02:10.381137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)
2025-05-25 01:02:10.381143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring)
2025-05-25 01:02:10.381149 | orchestrator |
2025-05-25 01:02:10.381243 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 01:02:10.381256 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-25 01:02:10.381264 | orchestrator |
2025-05-25 01:02:10.381270 | orchestrator |
2025-05-25 01:02:10.381277 | orchestrator |
2025-05-25 01:02:10.381283 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 01:02:10.381290 | orchestrator | Sunday 25 May 2025 01:02:09 +0000 (0:00:06.019) 0:00:26.961 ************
2025-05-25 01:02:10.381297 | orchestrator | ===============================================================================
2025-05-25 01:02:10.381303 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.02s
2025-05-25 01:02:10.381310 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.88s
2025-05-25 01:02:10.381316 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.60s
2025-05-25 01:02:10.381323 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.42s
2025-05-25 01:02:10.381330 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.11s
2025-05-25 01:02:10.381336 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.08s
2025-05-25 01:02:10.381359 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.83s
2025-05-25 01:02:10.381372 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.80s
2025-05-25 01:02:10.381380 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.79s
2025-05-25 01:02:10.381386 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.62s
2025-05-25 01:02:10.381393 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.60s
2025-05-25 01:02:10.381400 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.60s
2025-05-25 01:02:10.381406 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.48s
2025-05-25 01:02:10.381413 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.46s
2025-05-25 01:02:10.381419 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.45s
2025-05-25 01:02:10.381426 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.45s
2025-05-25 01:02:10.381433 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.45s
2025-05-25 01:02:10.381440 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.45s
2025-05-25 01:02:10.381447 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.44s
2025-05-25 01:02:10.381454 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.32s
2025-05-25 01:02:10.381460 | orchestrator | 2025-05-25 01:02:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:02:10.381470 | orchestrator | 2025-05-25 01:02:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:02:10.383713 | orchestrator | 2025-05-25 01:02:10 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED
2025-05-25 01:02:10.385451 | orchestrator | 2025-05-25 01:02:10 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:02:10.385895 | orchestrator | 2025-05-25 01:02:10 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:02:13.438290 | orchestrator | 2025-05-25 01:02:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:02:13.440908 | orchestrator | 2025-05-25 01:02:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is
in state STARTED 2025-05-25 01:02:13.444534 | orchestrator | 2025-05-25 01:02:13 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state STARTED 2025-05-25 01:02:13.446578 | orchestrator | 2025-05-25 01:02:13 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:13.447014 | orchestrator | 2025-05-25 01:02:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:16.497528 | orchestrator | 2025-05-25 01:02:16 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:16.498875 | orchestrator | 2025-05-25 01:02:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:16.500045 | orchestrator | 2025-05-25 01:02:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:16.501310 | orchestrator | 2025-05-25 01:02:16 | INFO  | Task 7b1178d4-c5dc-443b-a01e-d98c3e1046cd is in state SUCCESS 2025-05-25 01:02:16.502526 | orchestrator | 2025-05-25 01:02:16 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:16.502673 | orchestrator | 2025-05-25 01:02:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:19.551731 | orchestrator | 2025-05-25 01:02:19 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:19.552302 | orchestrator | 2025-05-25 01:02:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:19.553353 | orchestrator | 2025-05-25 01:02:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:19.554453 | orchestrator | 2025-05-25 01:02:19 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:19.554488 | orchestrator | 2025-05-25 01:02:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:22.597873 | orchestrator | 2025-05-25 01:02:22 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 
01:02:22.598395 | orchestrator | 2025-05-25 01:02:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:22.599322 | orchestrator | 2025-05-25 01:02:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:22.600184 | orchestrator | 2025-05-25 01:02:22 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:22.600273 | orchestrator | 2025-05-25 01:02:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:25.638705 | orchestrator | 2025-05-25 01:02:25 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:25.639408 | orchestrator | 2025-05-25 01:02:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:25.639673 | orchestrator | 2025-05-25 01:02:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:25.640749 | orchestrator | 2025-05-25 01:02:25 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:25.640773 | orchestrator | 2025-05-25 01:02:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:28.680016 | orchestrator | 2025-05-25 01:02:28 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:28.682379 | orchestrator | 2025-05-25 01:02:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:28.684332 | orchestrator | 2025-05-25 01:02:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:28.686416 | orchestrator | 2025-05-25 01:02:28 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:28.686471 | orchestrator | 2025-05-25 01:02:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:31.727597 | orchestrator | 2025-05-25 01:02:31 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:31.728389 | orchestrator 
| 2025-05-25 01:02:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:31.729513 | orchestrator | 2025-05-25 01:02:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:31.730952 | orchestrator | 2025-05-25 01:02:31 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:31.730987 | orchestrator | 2025-05-25 01:02:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:34.778307 | orchestrator | 2025-05-25 01:02:34 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:34.779080 | orchestrator | 2025-05-25 01:02:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:34.780142 | orchestrator | 2025-05-25 01:02:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:34.781591 | orchestrator | 2025-05-25 01:02:34 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:34.781630 | orchestrator | 2025-05-25 01:02:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:37.829310 | orchestrator | 2025-05-25 01:02:37 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:37.830192 | orchestrator | 2025-05-25 01:02:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:37.831532 | orchestrator | 2025-05-25 01:02:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:37.832276 | orchestrator | 2025-05-25 01:02:37 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:37.832308 | orchestrator | 2025-05-25 01:02:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:40.871153 | orchestrator | 2025-05-25 01:02:40 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:40.872833 | orchestrator | 2025-05-25 01:02:40 | INFO  | 
Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:40.873691 | orchestrator | 2025-05-25 01:02:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:40.874220 | orchestrator | 2025-05-25 01:02:40 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:40.874383 | orchestrator | 2025-05-25 01:02:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:43.915416 | orchestrator | 2025-05-25 01:02:43 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:43.917145 | orchestrator | 2025-05-25 01:02:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:43.918646 | orchestrator | 2025-05-25 01:02:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:43.920347 | orchestrator | 2025-05-25 01:02:43 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:43.920369 | orchestrator | 2025-05-25 01:02:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:46.967896 | orchestrator | 2025-05-25 01:02:46 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:46.970153 | orchestrator | 2025-05-25 01:02:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:46.971945 | orchestrator | 2025-05-25 01:02:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:46.973616 | orchestrator | 2025-05-25 01:02:46 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:46.973650 | orchestrator | 2025-05-25 01:02:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:50.025420 | orchestrator | 2025-05-25 01:02:50 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:50.027538 | orchestrator | 2025-05-25 01:02:50 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:50.029200 | orchestrator | 2025-05-25 01:02:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:50.031742 | orchestrator | 2025-05-25 01:02:50 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:50.031776 | orchestrator | 2025-05-25 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:53.080662 | orchestrator | 2025-05-25 01:02:53 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:53.081468 | orchestrator | 2025-05-25 01:02:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:53.083460 | orchestrator | 2025-05-25 01:02:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:53.084761 | orchestrator | 2025-05-25 01:02:53 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:53.084934 | orchestrator | 2025-05-25 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:56.131193 | orchestrator | 2025-05-25 01:02:56 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:56.133558 | orchestrator | 2025-05-25 01:02:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:56.135664 | orchestrator | 2025-05-25 01:02:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:56.137220 | orchestrator | 2025-05-25 01:02:56 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:56.137248 | orchestrator | 2025-05-25 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:02:59.180003 | orchestrator | 2025-05-25 01:02:59 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:02:59.180163 | orchestrator | 2025-05-25 01:02:59 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:02:59.180183 | orchestrator | 2025-05-25 01:02:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:02:59.181105 | orchestrator | 2025-05-25 01:02:59 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:02:59.181514 | orchestrator | 2025-05-25 01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:03:02.230428 | orchestrator | 2025-05-25 01:03:02 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:03:02.232379 | orchestrator | 2025-05-25 01:03:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:03:02.234751 | orchestrator | 2025-05-25 01:03:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:03:02.236391 | orchestrator | 2025-05-25 01:03:02 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:03:02.236527 | orchestrator | 2025-05-25 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:03:05.278228 | orchestrator | 2025-05-25 01:03:05 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:03:05.278400 | orchestrator | 2025-05-25 01:03:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:03:05.279249 | orchestrator | 2025-05-25 01:03:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:03:05.280328 | orchestrator | 2025-05-25 01:03:05 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED 2025-05-25 01:03:05.280355 | orchestrator | 2025-05-25 01:03:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:03:08.328289 | orchestrator | 2025-05-25 01:03:08 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED 2025-05-25 01:03:08.328894 | orchestrator | 2025-05-25 01:03:08 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:03:08.328924 | orchestrator | 2025-05-25 01:03:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:03:08.329177 | orchestrator | 2025-05-25 01:03:08 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:03:08.329375 | orchestrator | 2025-05-25 01:03:08 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:03:11.380842 | orchestrator | 2025-05-25 01:03:11 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state STARTED
2025-05-25 01:03:11.382011 | orchestrator | 2025-05-25 01:03:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:03:11.384583 | orchestrator | 2025-05-25 01:03:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:03:11.385850 | orchestrator | 2025-05-25 01:03:11 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:03:11.386213 | orchestrator | 2025-05-25 01:03:11 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:03:14.435959 | orchestrator | 2025-05-25 01:03:14 | INFO  | Task f97aedfd-ce49-4418-8a41-7a9a728c3c78 is in state STARTED
2025-05-25 01:03:14.437412 | orchestrator | 2025-05-25 01:03:14 | INFO  | Task cad1f571-3189-44d4-b870-eefd83ef7890 is in state SUCCESS
2025-05-25 01:03:14.438310 | orchestrator |
2025-05-25 01:03:14.438354 | orchestrator |
2025-05-25 01:03:14.438369 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-25 01:03:14.438383 | orchestrator |
2025-05-25 01:03:14.438395 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-05-25 01:03:14.438406 | orchestrator | Sunday 25 May 2025 01:01:34 +0000 (0:00:00.149) 0:00:00.149 ************
2025-05-25 01:03:14.438417 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-25 01:03:14.438428 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-25 01:03:14.438439 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-25 01:03:14.438450 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-25 01:03:14.438460 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-25 01:03:14.438471 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-25 01:03:14.438482 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-25 01:03:14.438493 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-25 01:03:14.438504 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-25 01:03:14.438515 | orchestrator |
2025-05-25 01:03:14.438525 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-05-25 01:03:14.438536 | orchestrator | Sunday 25 May 2025 01:01:37 +0000 (0:00:02.884) 0:00:03.033 ************
2025-05-25 01:03:14.438547 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-25 01:03:14.438558 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-25 01:03:14.438569 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-25 01:03:14.438580 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-25 01:03:14.438590 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-25 01:03:14.438601 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-25 01:03:14.438612 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-25 01:03:14.438622 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-25 01:03:14.438634 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-25 01:03:14.438645 | orchestrator |
2025-05-25 01:03:14.438672 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-05-25 01:03:14.438684 | orchestrator | Sunday 25 May 2025 01:01:37 +0000 (0:00:00.260) 0:00:03.294 ************
2025-05-25 01:03:14.438695 | orchestrator | ok: [testbed-manager] => {
2025-05-25 01:03:14.438708 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-05-25 01:03:14.438721 | orchestrator | }
2025-05-25 01:03:14.438733 | orchestrator |
2025-05-25 01:03:14.438744 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-05-25 01:03:14.438755 | orchestrator | Sunday 25 May 2025 01:01:38 +0000 (0:00:00.172) 0:00:03.466 ************
2025-05-25 01:03:14.438788 | orchestrator | changed: [testbed-manager]
2025-05-25 01:03:14.438799 | orchestrator |
2025-05-25 01:03:14.438810 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-05-25 01:03:14.438821 | orchestrator | Sunday 25 May 2025 01:02:10 +0000 (0:00:32.533) 0:00:36.000 ************
2025-05-25 01:03:14.438833 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-05-25 01:03:14.438844 | orchestrator |
2025-05-25 01:03:14.438855 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-05-25 01:03:14.438866 | orchestrator | Sunday 25 May 2025 01:02:11 +0000 (0:00:00.430) 0:00:36.430 ************
2025-05-25 01:03:14.438877 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-05-25 01:03:14.438890 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-05-25 01:03:14.438901 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-05-25 01:03:14.438912 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-05-25 01:03:14.438923 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-05-25 01:03:14.438947 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-05-25 01:03:14.438959 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-05-25 01:03:14.438970 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-05-25 01:03:14.438981 | orchestrator |
2025-05-25 01:03:14.438991 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-05-25 01:03:14.439002 | orchestrator | Sunday 25 May 2025 01:02:13 +0000 (0:00:02.814) 0:00:39.245 ************
2025-05-25 01:03:14.439013 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:03:14.439024 | orchestrator |
2025-05-25 01:03:14.439034 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 01:03:14.439046 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 01:03:14.439057 | orchestrator |
2025-05-25 01:03:14.439107 | orchestrator | Sunday 25 May 2025 01:02:13 +0000 (0:00:00.033) 0:00:39.278 ************
2025-05-25 01:03:14.439121 | orchestrator | ===============================================================================
2025-05-25 01:03:14.439151 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 32.53s
2025-05-25 01:03:14.439162 | orchestrator | Check ceph keys --------------------------------------------------------- 2.88s
2025-05-25 01:03:14.439173 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.81s
2025-05-25 01:03:14.439184 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.43s
2025-05-25 01:03:14.439194 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.26s
2025-05-25 01:03:14.439205 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.17s
2025-05-25 01:03:14.439225 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s
2025-05-25 01:03:14.439236 | orchestrator |
2025-05-25 01:03:14.439318 | orchestrator | 2025-05-25 01:03:14 | INFO  | Task b324e5a1-6aea-4055-bb72-e098126d367a is in state STARTED
2025-05-25 01:03:14.440587 | orchestrator | 2025-05-25 01:03:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25
01:03:14.441847 | orchestrator | 2025-05-25 01:03:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:03:14.443256 | orchestrator | 2025-05-25 01:03:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:03:14.444410 | orchestrator | 2025-05-25 01:03:14 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state STARTED
2025-05-25 01:03:14.444431 | orchestrator | 2025-05-25 01:03:14 | INFO  | Wait 1 second(s) until the next check
01:03:32.761099 | orchestrator | 2025-05-25 01:03:32 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:03:35.799863 | orchestrator | 2025-05-25 01:03:35 | INFO  | Task f97aedfd-ce49-4418-8a41-7a9a728c3c78 is in state STARTED
2025-05-25 01:03:35.799947 | orchestrator | 2025-05-25 01:03:35 | INFO  | Task b324e5a1-6aea-4055-bb72-e098126d367a is in state STARTED
2025-05-25 01:03:35.801243 | orchestrator | 2025-05-25 01:03:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:03:35.802797 | orchestrator | 2025-05-25 01:03:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:03:35.804206 | orchestrator | 2025-05-25 01:03:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:03:35.805405 | orchestrator | 2025-05-25 01:03:35 | INFO  | Task 404e46ba-7b4f-4eaf-8d04-3d6edbee6ee6 is in state SUCCESS
2025-05-25 01:03:35.805651 | orchestrator |
2025-05-25 01:03:35.805673 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-05-25 01:03:35.805731 | orchestrator |
2025-05-25 01:03:35.805740 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-05-25 01:03:35.805747 | orchestrator | Sunday 25 May 2025 01:02:17 +0000 (0:00:00.164) 0:00:00.165 ************
2025-05-25 01:03:35.805754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-05-25 01:03:35.805763 | orchestrator |
2025-05-25 01:03:35.805769 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-05-25 01:03:35.805776 | orchestrator | Sunday 25 May 2025 01:02:17 +0000 (0:00:00.212) 0:00:00.377 ************
2025-05-25 01:03:35.805784 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-05-25 01:03:35.805791 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-05-25 01:03:35.805800 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-05-25 01:03:35.805807 | orchestrator |
2025-05-25 01:03:35.805813 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-05-25 01:03:35.805820 | orchestrator | Sunday 25 May 2025 01:02:18 +0000 (0:00:01.216) 0:00:01.593 ************
2025-05-25 01:03:35.805827 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-05-25 01:03:35.805834 | orchestrator |
2025-05-25 01:03:35.805841 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-05-25 01:03:35.805847 | orchestrator | Sunday 25 May 2025 01:02:19 +0000 (0:00:01.114) 0:00:02.707 ************
2025-05-25 01:03:35.805855 | orchestrator | changed: [testbed-manager]
2025-05-25 01:03:35.805862 | orchestrator |
2025-05-25 01:03:35.805869 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-05-25 01:03:35.805876 | orchestrator | Sunday 25 May 2025 01:02:20 +0000 (0:00:00.874) 0:00:03.582 ************
2025-05-25 01:03:35.805882 | orchestrator | changed: [testbed-manager]
2025-05-25 01:03:35.805889 | orchestrator |
2025-05-25 01:03:35.805896 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-05-25 01:03:35.805903 | orchestrator | Sunday 25 May 2025 01:02:21 +0000 (0:00:00.993) 0:00:04.576 ************
2025-05-25 01:03:35.805910 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-05-25 01:03:35.805917 | orchestrator | ok: [testbed-manager]
2025-05-25 01:03:35.805924 | orchestrator |
2025-05-25 01:03:35.805931 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-05-25 01:03:35.805938 | orchestrator | Sunday 25 May 2025 01:03:02 +0000 (0:00:40.827) 0:00:45.403 ************
2025-05-25 01:03:35.805945 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-05-25 01:03:35.805952 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-05-25 01:03:35.805959 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-05-25 01:03:35.805965 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-05-25 01:03:35.805972 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-05-25 01:03:35.805978 | orchestrator |
2025-05-25 01:03:35.805999 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-05-25 01:03:35.806007 | orchestrator | Sunday 25 May 2025 01:03:06 +0000 (0:00:03.905) 0:00:49.309 ************
2025-05-25 01:03:35.806041 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-05-25 01:03:35.806050 | orchestrator |
2025-05-25 01:03:35.806057 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-05-25 01:03:35.806063 | orchestrator | Sunday 25 May 2025 01:03:06 +0000 (0:00:00.444) 0:00:49.754 ************
2025-05-25 01:03:35.806070 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:03:35.806077 | orchestrator |
2025-05-25 01:03:35.806084 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-05-25 01:03:35.806090 | orchestrator | Sunday 25 May 2025 01:03:06 +0000 (0:00:00.119) 0:00:49.873 ************
2025-05-25 01:03:35.806097 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:03:35.806111 | orchestrator |
2025-05-25 01:03:35.806118 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-05-25 01:03:35.806125 | orchestrator | Sunday 25 May 2025 01:03:07 +0000 (0:00:00.316) 0:00:50.190 ************
2025-05-25 01:03:35.806132 | orchestrator | changed: [testbed-manager]
2025-05-25 01:03:35.806166 | orchestrator |
2025-05-25 01:03:35.806174 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-05-25 01:03:35.806181 | orchestrator | Sunday 25 May 2025 01:03:08 +0000 (0:00:01.549) 0:00:51.739 ************
2025-05-25 01:03:35.806187 | orchestrator | changed: [testbed-manager]
2025-05-25 01:03:35.806193 | orchestrator |
2025-05-25 01:03:35.806202 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-05-25 01:03:35.806208 | orchestrator | Sunday 25 May 2025 01:03:09 +0000 (0:00:00.833) 0:00:52.573 ************
2025-05-25 01:03:35.806213 | orchestrator | changed: [testbed-manager]
2025-05-25 01:03:35.806219 | orchestrator |
2025-05-25 01:03:35.806226 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-05-25 01:03:35.806232 | orchestrator | Sunday 25 May 2025 01:03:10 +0000 (0:00:00.547) 0:00:53.120 ************
2025-05-25 01:03:35.806239 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-05-25 01:03:35.806246 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-05-25 01:03:35.806252 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-05-25 01:03:35.806259 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-05-25 01:03:35.806275 | orchestrator |
2025-05-25 01:03:35.806281 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 01:03:35.806288 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-25 01:03:35.806295 | orchestrator |
2025-05-25 01:03:35.806313 | orchestrator | Sunday 25 May 2025 01:03:11 +0000 (0:00:01.435) 0:00:54.556 ************
2025-05-25 01:03:35.806320 | orchestrator | ===============================================================================
2025-05-25 01:03:35.806327 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.83s
2025-05-25 01:03:35.806334 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.91s
2025-05-25 01:03:35.806340 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.55s
2025-05-25 01:03:35.806347 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.44s
2025-05-25 01:03:35.806353 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s
2025-05-25 01:03:35.806360 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s
2025-05-25 01:03:35.806367 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.99s
2025-05-25 01:03:35.806374 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.87s
2025-05-25 01:03:35.806381 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.83s
2025-05-25 01:03:35.806386 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.55s
2025-05-25 01:03:35.806392 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s
2025-05-25 01:03:35.806399 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s
2025-05-25 01:03:35.806405 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2025-05-25 01:03:35.806412 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-05-25 01:03:35.806419 | orchestrator |
2025-05-25 01:03:35.806480 | orchestrator 
| 2025-05-25 01:03:35 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:03:38.841673 | orchestrator | 2025-05-25 01:03:38 | INFO  | Task f97aedfd-ce49-4418-8a41-7a9a728c3c78 is in state STARTED
2025-05-25 01:03:38.845617 | orchestrator | 2025-05-25 01:03:38 | INFO  | Task b324e5a1-6aea-4055-bb72-e098126d367a is in state STARTED
2025-05-25 01:03:38.847084 | orchestrator | 2025-05-25 01:03:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:03:38.850516 | orchestrator | 2025-05-25 01:03:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:03:38.851944 | orchestrator | 2025-05-25 01:03:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:03:38.851977 | orchestrator | 2025-05-25 01:03:38 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:03:47.979778 | orchestrator | 2025-05-25 01:03:47 | INFO  | Task f97aedfd-ce49-4418-8a41-7a9a728c3c78 is in state SUCCESS
2025-05-25 01:03:47.980448 | orchestrator | 2025-05-25 01:03:47 | INFO  | Task b324e5a1-6aea-4055-bb72-e098126d367a is in state STARTED
2025-05-25 01:03:47.980666 | orchestrator | 2025-05-25 01:03:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:03:47.981393 | orchestrator | 2025-05-25 01:03:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:03:47.981828 | orchestrator | 2025-05-25 01:03:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:03:47.981958 | orchestrator | 2025-05-25 01:03:47 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:16.482891 | orchestrator | 2025-05-25 01:05:16 | INFO  | Task 
b324e5a1-6aea-4055-bb72-e098126d367a is in state SUCCESS
2025-05-25 01:05:16.483977 | orchestrator |
2025-05-25 01:05:16.484023 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-25 01:05:16.484037 | orchestrator |
2025-05-25 01:05:16.484048 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-25 01:05:16.484060 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.192) 0:00:00.192 ************
2025-05-25 01:05:16.484071 | orchestrator | changed: [localhost]
2025-05-25 01:05:16.484083 | orchestrator |
2025-05-25 01:05:16.484094 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-25 01:05:16.484105 | orchestrator | Sunday 25 May 2025 00:57:17 +0000 (0:00:00.659) 0:00:00.852 ************
2025-05-25 01:05:16.484116 | orchestrator |
2025-05-25 01:05:16.484127 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-05-25 01:05:16.484635 | orchestrator | changed: [localhost]
2025-05-25 01:05:16.484653 | orchestrator |
2025-05-25 01:05:16.484671 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-25 01:05:16.484691 | orchestrator | Sunday 25 May 2025 01:03:21 +0000 (0:06:03.800) 0:06:04.652 ************
2025-05-25 01:05:16.484708 | orchestrator | changed: [localhost]
2025-05-25 01:05:16.484728 | orchestrator |
2025-05-25 01:05:16.484748 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 01:05:16.484767 | orchestrator |
2025-05-25 01:05:16.484784 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 01:05:16.484797 | orchestrator | Sunday 25 May 2025 01:03:34 +0000 (0:00:12.989) 0:06:17.642 ************
2025-05-25 01:05:16.484808 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:05:16.484819 | orchestrator | ok: [testbed-node-1]
2025-05-25 01:05:16.484830 | orchestrator | ok: [testbed-node-2]
2025-05-25 01:05:16.484841 | orchestrator |
2025-05-25 01:05:16.484851 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 01:05:16.484862 | orchestrator | Sunday 25 May 2025 01:03:34 +0000 (0:00:00.431) 0:06:18.073 ************
2025-05-25 01:05:16.484873 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-25 01:05:16.484884 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-25 01:05:16.484895 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-25 01:05:16.484906 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-25 01:05:16.484917 | orchestrator |
2025-05-25 01:05:16.484927 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-25 01:05:16.484938 | orchestrator | skipping: no hosts matched
2025-05-25 01:05:16.484950 | orchestrator |
2025-05-25 01:05:16.484960 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 01:05:16.484987 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 01:05:16.485001 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 01:05:16.485014 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 01:05:16.485025 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 01:05:16.485085 | orchestrator |
2025-05-25 01:05:16.485098 | orchestrator |
2025-05-25 01:05:16.485109 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 01:05:16.485120 | orchestrator | Sunday 25 May 2025 01:03:35 +0000 (0:00:00.429) 0:06:18.502 ************
2025-05-25 01:05:16.485131 | orchestrator | ===============================================================================
2025-05-25 01:05:16.485142 | orchestrator | Download ironic-agent initramfs --------------------------------------- 363.80s
2025-05-25 01:05:16.485153 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.99s
2025-05-25 01:05:16.485303 | orchestrator | Ensure the destination directory exists --------------------------------- 0.66s
2025-05-25 01:05:16.485331 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2025-05-25 01:05:16.485341 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2025-05-25
01:05:16.485352 | orchestrator |
2025-05-25 01:05:16.485363 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-25 01:05:16.485374 | orchestrator |
2025-05-25 01:05:16.485385 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
2025-05-25 01:05:16.485396 | orchestrator |
2025-05-25 01:05:16.485407 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-05-25 01:05:16.485418 | orchestrator | Sunday 25 May 2025 01:03:15 +0000 (0:00:00.414) 0:00:00.414 ************
2025-05-25 01:05:16.485428 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.485534 | orchestrator |
2025-05-25 01:05:16.485547 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-05-25 01:05:16.485558 | orchestrator | Sunday 25 May 2025 01:03:16 +0000 (0:00:01.438) 0:00:01.852 ************
2025-05-25 01:05:16.485569 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.485580 | orchestrator |
2025-05-25 01:05:16.485606 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-05-25 01:05:16.485618 | orchestrator | Sunday 25 May 2025 01:03:17 +0000 (0:00:01.141) 0:00:02.994 ************
2025-05-25 01:05:16.485629 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.485639 | orchestrator |
2025-05-25 01:05:16.485650 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-05-25 01:05:16.485661 | orchestrator | Sunday 25 May 2025 01:03:18 +0000 (0:00:00.982) 0:00:03.976 ************
2025-05-25 01:05:16.485672 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.485683 | orchestrator |
2025-05-25 01:05:16.485693 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-05-25 01:05:16.485704 | orchestrator | Sunday 25 May 2025 01:03:20 +0000 (0:00:01.065) 0:00:05.042 ************
2025-05-25 01:05:16.485715 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.485726 | orchestrator |
2025-05-25 01:05:16.485743 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-05-25 01:05:16.485761 | orchestrator | Sunday 25 May 2025 01:03:21 +0000 (0:00:01.244) 0:00:06.286 ************
2025-05-25 01:05:16.485779 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.485796 | orchestrator |
2025-05-25 01:05:16.485814 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-05-25 01:05:16.485832 | orchestrator | Sunday 25 May 2025 01:03:22 +0000 (0:00:01.107) 0:00:07.394 ************
2025-05-25 01:05:16.485849 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.485869 | orchestrator |
2025-05-25 01:05:16.485888 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-05-25 01:05:16.486009 | orchestrator | Sunday 25 May 2025 01:03:23 +0000 (0:00:01.250) 0:00:08.616 ************
2025-05-25 01:05:16.486087 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.486099 | orchestrator |
2025-05-25 01:05:16.486110 | orchestrator | TASK [Create admin user] *******************************************************
2025-05-25 01:05:16.486121 | orchestrator | Sunday 25 May 2025 01:03:24 +0000 (0:00:01.250) 0:00:09.866 ************
2025-05-25 01:05:16.486132 | orchestrator | changed: [testbed-manager]
2025-05-25 01:05:16.486143 | orchestrator |
2025-05-25 01:05:16.486286 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-05-25 01:05:16.486299 | orchestrator | Sunday 25 May 2025 01:03:41 +0000 (0:00:16.467) 0:00:26.334 ************
2025-05-25 01:05:16.486310 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:05:16.486321 | orchestrator |
2025-05-25 01:05:16.486332 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-25 01:05:16.486343 | orchestrator |
2025-05-25 01:05:16.486353 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-25 01:05:16.486431 | orchestrator | Sunday 25 May 2025 01:03:41 +0000 (0:00:00.645) 0:00:26.980 ************
2025-05-25 01:05:16.486454 | orchestrator | changed: [testbed-node-0]
2025-05-25 01:05:16.486465 | orchestrator |
2025-05-25 01:05:16.486476 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-25 01:05:16.486504 | orchestrator |
2025-05-25 01:05:16.486526 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-25 01:05:16.486571 | orchestrator | Sunday 25 May 2025 01:03:44 +0000 (0:00:02.196) 0:00:29.176 ************
2025-05-25 01:05:16.486585 | orchestrator | changed: [testbed-node-1]
2025-05-25 01:05:16.486596 | orchestrator |
2025-05-25 01:05:16.486607 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-25 01:05:16.486618 | orchestrator |
2025-05-25 01:05:16.486629 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-25 01:05:16.486648 | orchestrator | Sunday 25 May 2025 01:03:46 +0000 (0:00:01.895) 0:00:31.072 ************
2025-05-25 01:05:16.486659 | orchestrator | changed: [testbed-node-2]
2025-05-25 01:05:16.486670 | orchestrator |
2025-05-25 01:05:16.486681 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 01:05:16.486692 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-25 01:05:16.486704 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 01:05:16.486715 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 01:05:16.486726 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-25 01:05:16.486737 | orchestrator |
2025-05-25 01:05:16.486748 | orchestrator |
2025-05-25 01:05:16.486759 | orchestrator |
2025-05-25 01:05:16.486770 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 01:05:16.486780 | orchestrator | Sunday 25 May 2025 01:03:47 +0000 (0:00:01.335) 0:00:32.407 ************
2025-05-25 01:05:16.486791 | orchestrator | ===============================================================================
2025-05-25 01:05:16.486802 | orchestrator | Create admin user ------------------------------------------------------ 16.47s
2025-05-25 01:05:16.486812 | orchestrator | Restart ceph manager service -------------------------------------------- 5.43s
2025-05-25 01:05:16.486823 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.44s
2025-05-25 01:05:16.486834 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.25s
2025-05-25 01:05:16.486845 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.24s
2025-05-25 01:05:16.486855 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.22s
2025-05-25 01:05:16.486866 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.14s
2025-05-25 01:05:16.486881 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s
2025-05-25 01:05:16.486913 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.07s
2025-05-25 01:05:16.487136 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.98s
2025-05-25 01:05:16.487149 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.65s
2025-05-25 01:05:16.487160 | orchestrator |
2025-05-25 01:05:16.487171 | orchestrator |
2025-05-25 01:05:16.487181 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 01:05:16.487192 | orchestrator |
2025-05-25 01:05:16.487203 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-25 01:05:16.487291 | orchestrator | Sunday 25 May 2025 01:03:15 +0000 (0:00:00.329) 0:00:00.329 ************
2025-05-25 01:05:16.487304 | orchestrator | ok: [testbed-manager]
2025-05-25 01:05:16.487315 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:05:16.487326 | orchestrator | ok: [testbed-node-1]
2025-05-25 01:05:16.487337 | orchestrator | ok: [testbed-node-2]
2025-05-25 01:05:16.487359 | orchestrator | ok: [testbed-node-3]
2025-05-25 01:05:16.487370 | orchestrator | ok: [testbed-node-4]
2025-05-25 01:05:16.487381 | orchestrator | ok: [testbed-node-5]
2025-05-25 01:05:16.487392 | orchestrator |
2025-05-25 01:05:16.487403 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 01:05:16.487414 | orchestrator | Sunday 25 May 2025 01:03:16 +0000 (0:00:01.117) 0:00:01.447 ************
2025-05-25 01:05:16.487425 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-05-25 01:05:16.487436 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-05-25 01:05:16.487446 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-05-25 01:05:16.487457 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-05-25 01:05:16.487468 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-05-25 01:05:16.487478 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-25 01:05:16.487489 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-05-25
01:05:16.487500 | orchestrator |
2025-05-25 01:05:16.487511 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-05-25 01:05:16.487522 | orchestrator |
2025-05-25 01:05:16.487532 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-25 01:05:16.487543 | orchestrator | Sunday 25 May 2025 01:03:17 +0000 (0:00:01.119) 0:00:02.566 ************
2025-05-25 01:05:16.487554 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-25 01:05:16.487567 | orchestrator |
2025-05-25 01:05:16.487577 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-05-25 01:05:16.487602 | orchestrator | Sunday 25 May 2025 01:03:18 +0000 (0:00:01.515) 0:00:04.082 ************
2025-05-25 01:05:16.487624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 01:05:16.487641 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 01:05:16.487654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 01:05:16.487686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 01:05:16.487699 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.487712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.487724 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.487741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 01:05:16.487753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 01:05:16.487772 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 01:05:16.487791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.487802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 01:05:16.487812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 01:05:16.487827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 01:05:16.487837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 01:05:16.487855 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-25 01:05:16.487873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-25 01:05:16.487884 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-25 01:05:16.487900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-25 01:05:16.487911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-25 01:05:16.487927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.487943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-25 01:05:16.487954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.487964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 01:05:16.487975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.487994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.488005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.488020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-25 01:05:16.488067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.488086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.488104 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-25 01:05:16.488120 | orchestrator | skipping:
[testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.488138 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.488182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.488232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.488263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488273 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.488283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.488353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.488365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.488399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.488417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 
'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.488474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.488506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.488549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.488560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.488590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.488670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.488681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.488707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.488727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.488749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.488798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.488820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.488830 | orchestrator | 2025-05-25 01:05:16.488840 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-25 01:05:16.488850 | orchestrator | Sunday 25 May 2025 01:03:23 +0000 (0:00:04.441) 0:00:08.523 ************ 2025-05-25 01:05:16.488860 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-25 01:05:16.488870 | orchestrator | 2025-05-25 01:05:16.488879 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-25 01:05:16.488889 | orchestrator | Sunday 25 May 2025 01:03:24 +0000 (0:00:01.685) 0:00:10.209 ************ 2025-05-25 01:05:16.488911 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 01:05:16.488922 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.488933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.488943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.488969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.488979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.488989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.489025 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.489040 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489125 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489236 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 01:05:16.489264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489323 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.489388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.489429 | orchestrator | 2025-05-25 01:05:16.489440 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-25 01:05:16.489460 | orchestrator | Sunday 25 May 2025 01:03:30 +0000 (0:00:05.980) 0:00:16.190 ************ 2025-05-25 01:05:16.489471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489533 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.489549 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.489586 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489596 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:05:16.489606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489668 | orchestrator | skipping: [testbed-manager] 2025-05-25 01:05:16.489678 | orchestrator | skipping: [testbed-node-1] 2025-05-25 01:05:16.489692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.489757 | orchestrator | skipping: [testbed-node-2] 2025-05-25 01:05:16.489768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489803 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:05:16.489813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489849 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:05:16.489864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489895 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:05:16.489905 | orchestrator | 2025-05-25 01:05:16.489914 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-25 01:05:16.489924 | orchestrator | Sunday 25 May 2025 01:03:33 +0000 (0:00:02.070) 0:00:18.261 ************ 2025-05-25 01:05:16.489939 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.489950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.489960 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.489981 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.489992 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.490013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.490125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490170 | orchestrator | skipping: [testbed-manager] 2025-05-25 01:05:16.490180 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:05:16.490190 | orchestrator | skipping: [testbed-node-1] 2025-05-25 01:05:16.490200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.490246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490281 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490302 | orchestrator | skipping: [testbed-node-2] 2025-05-25 01:05:16.490312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.490321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490364 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:05:16.490413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.490441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490498 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:05:16.490514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-25 01:05:16.490538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-25 
01:05:16.490558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.490575 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:05:16.490592 | orchestrator | 2025-05-25 01:05:16.490608 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-25 01:05:16.490626 | orchestrator | Sunday 25 May 2025 01:03:35 +0000 (0:00:02.718) 0:00:20.980 ************ 2025-05-25 01:05:16.490652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.490682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.490711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/' 2025-05-25 01:05:16.490730 | orchestrator | ], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED 2025-05-25 01:05:16.490750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.490768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.490792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.490816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.490833 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 01:05:16.490845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-05-25 01:05:16.490855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.490865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.490875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.490920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.490935 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.490946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490956 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.490976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-05-25 01:05:16.490996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.491007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.491070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.491086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491111 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.491163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.491174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.491295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.491305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491354 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 01:05:16.491372 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.491387 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.491490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.491516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.491592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.491600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.491616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.491630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.491638 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491650 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.491659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.491702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.491735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.491757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.491789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.491797 | orchestrator | 2025-05-25 01:05:16.491805 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-25 01:05:16.491813 | orchestrator | Sunday 25 May 2025 01:03:42 +0000 (0:00:06.506) 0:00:27.487 ************ 2025-05-25 01:05:16.491822 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-25 01:05:16.491830 | orchestrator | 2025-05-25 01:05:16.491838 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-25 01:05:16.491846 | orchestrator | Sunday 25 May 2025 01:03:42 +0000 (0:00:00.587) 0:00:28.074 ************ 2025-05-25 01:05:16.491887 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326836, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.491953 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326836, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 
1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.491969 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326836, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.491990 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326836, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492013 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326836, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492025 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326843, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492037 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326836, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492056 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326843, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492070 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326836, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-25 01:05:16.492084 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326843, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492099 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326838, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492119 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326843, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492128 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326843, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492136 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326843, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492144 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326838, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492157 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326842, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492165 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326838, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492174 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326838, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492192 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326838, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492405 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326838, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492421 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326842, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492430 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326842, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492443 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326867, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326842, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492466 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 
'inode': 1326842, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492475 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326867, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492488 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326842, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492497 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326846, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492505 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326867, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492517 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326867, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492526 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326843, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-05-25 01:05:16.492544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326846, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492552 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326867, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492564 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326867, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492573 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326840, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492581 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326846, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492593 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326846, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492602 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 
'inode': 1326840, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492615 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326846, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492623 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326845, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-25 01:05:16.492635 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326846, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492643 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326845, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492652 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326840, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492664 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326840, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492672 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326840, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492686 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326866, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492694 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326866, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492706 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326840, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492714 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326839, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492723 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326845, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492735 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326845, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492747 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326845, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492756 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326839, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492764 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326838, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime':
1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492772 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326866, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492785 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326851, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2203062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492793 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.492802 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326845, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492814 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326851, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2203062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492827 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.492835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326866, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492844 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326866, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492852 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326839, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492863 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326866, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492872 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326851, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2203062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492880 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.492888 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326839, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492905 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326839, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492913 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326839, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492922 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr':
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326851, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2203062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492929 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.492938 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326851, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2203062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492946 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.492958 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326851, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2203062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492967 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.492975 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326842, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.492983 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326867, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.493000 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326846, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.493009 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326840, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.218306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.493017 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326845, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2193062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.493026 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326866, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2223063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.493037 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326839, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2173061, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.493046 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326851, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.2203062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-25 01:05:16.493059 | orchestrator |
2025-05-25 01:05:16.493067 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-25 01:05:16.493075 | orchestrator | Sunday 25 May 2025 01:04:14 +0000 (0:00:31.181) 0:00:59.256 ************
2025-05-25 01:05:16.493084 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 01:05:16.493092 | orchestrator |
2025-05-25 01:05:16.493099 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-25 01:05:16.493108 | orchestrator | Sunday 25 May 2025 01:04:14 +0000 (0:00:00.414) 0:00:59.671 ************
2025-05-25 01:05:16.493116 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.493124 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493132 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-05-25 01:05:16.493140 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493153 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-05-25
01:05:16.493161 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 01:05:16.493169 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.493176 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493184 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-05-25 01:05:16.493192 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493200 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-05-25 01:05:16.493256 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 01:05:16.493266 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.493274 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493281 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-05-25 01:05:16.493289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493297 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-05-25 01:05:16.493305 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.493313 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493322 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-05-25 01:05:16.493334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493347 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-05-25 01:05:16.493357 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.493365 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493373 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-05-25 01:05:16.493381 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493388 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-05-25 01:05:16.493396 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.493404 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493412 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-05-25 01:05:16.493419 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493427 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-05-25 01:05:16.493435 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.493443 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493450 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-05-25 01:05:16.493458 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-25 01:05:16.493466 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-05-25 01:05:16.493473 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-25 01:05:16.493481 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-25 01:05:16.493497 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-25 01:05:16.493505 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-25 01:05:16.493513 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-25 01:05:16.493522 | orchestrator |
2025-05-25 01:05:16.493535 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-25 01:05:16.493548 | orchestrator | Sunday 25 May 2025 01:04:15 +0000 (0:00:01.323) 0:01:00.994 ************
2025-05-25 01:05:16.493568 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 01:05:16.493583 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.493596 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 01:05:16.493616 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.493629 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 01:05:16.493641 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.493654 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 01:05:16.493668 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.493681 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 01:05:16.493694 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.493708 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 01:05:16.493720 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.493731 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-25 01:05:16.493743 | orchestrator |
2025-05-25 01:05:16.493750 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-25 01:05:16.493757 | orchestrator | Sunday 25 May 2025 01:04:28 +0000 (0:00:12.928) 0:01:13.922 ************
2025-05-25 01:05:16.493764 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 01:05:16.493770 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.493777 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 01:05:16.493783 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.493790 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 01:05:16.493796 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.493808 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 01:05:16.493815 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.493821 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 01:05:16.493828 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.493834 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 01:05:16.493841 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.493848 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-25 01:05:16.493854 | orchestrator |
2025-05-25 01:05:16.493861 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-25 01:05:16.493868 | orchestrator | Sunday 25 May 2025 01:04:32 +0000 (0:00:04.280) 0:01:18.203 ************
2025-05-25 01:05:16.493874 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 01:05:16.493881 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.493888 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 01:05:16.493902 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.493909 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 01:05:16.493916 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.493923 | orchestrator | skipping:
[testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 01:05:16.493929 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.493936 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 01:05:16.493942 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.493949 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 01:05:16.493958 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.493969 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-25 01:05:16.493976 | orchestrator |
2025-05-25 01:05:16.493983 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-25 01:05:16.493989 | orchestrator | Sunday 25 May 2025 01:04:36 +0000 (0:00:03.352) 0:01:21.556 ************
2025-05-25 01:05:16.493996 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 01:05:16.494002 | orchestrator |
2025-05-25 01:05:16.494009 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-25 01:05:16.494052 | orchestrator | Sunday 25 May 2025 01:04:36 +0000 (0:00:00.540) 0:01:22.097 ************
2025-05-25 01:05:16.494061 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:05:16.494068 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.494075 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.494081 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.494088 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.494094 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.494101 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.494107 | orchestrator |
2025-05-25 01:05:16.494114 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-25 01:05:16.494121 | orchestrator | Sunday 25 May 2025 01:04:37 +0000 (0:00:00.763) 0:01:22.860 ************
2025-05-25 01:05:16.494128 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:05:16.494134 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.494148 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.494160 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.494171 | orchestrator | changed: [testbed-node-0]
2025-05-25 01:05:16.494182 | orchestrator | changed: [testbed-node-1]
2025-05-25 01:05:16.494193 | orchestrator | changed: [testbed-node-2]
2025-05-25 01:05:16.494222 | orchestrator |
2025-05-25 01:05:16.494232 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-25 01:05:16.494239 | orchestrator | Sunday 25 May 2025 01:04:41 +0000 (0:00:03.620) 0:01:26.480 ************
2025-05-25 01:05:16.494246 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 01:05:16.494252 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.494259 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 01:05:16.494266 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.494272 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 01:05:16.494279 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.494286 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 01:05:16.494292 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.494299 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 01:05:16.494313 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.494320 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 01:05:16.494326 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.494333 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-25 01:05:16.494340 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:05:16.494346 | orchestrator |
2025-05-25 01:05:16.494353 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-25 01:05:16.494363 | orchestrator | Sunday 25 May 2025 01:04:44 +0000 (0:00:03.032) 0:01:29.512 ************
2025-05-25 01:05:16.494370 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 01:05:16.494377 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.494384 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 01:05:16.494390 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.494397 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 01:05:16.494404 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.494410 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 01:05:16.494417 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.494423 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 01:05:16.494430 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.494437 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 01:05:16.494443 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.494450 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-25 01:05:16.494457 | orchestrator |
2025-05-25 01:05:16.494463 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-05-25 01:05:16.494470 | orchestrator | Sunday 25 May 2025 01:04:47 +0000 (0:00:03.455) 0:01:32.968 ************
2025-05-25 01:05:16.494476 | orchestrator | [WARNING]: Skipped
2025-05-25 01:05:16.494483 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-05-25 01:05:16.494490 | orchestrator | due to this access issue:
2025-05-25 01:05:16.494496 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-05-25 01:05:16.494503 | orchestrator | not a directory
2025-05-25 01:05:16.494510 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-25 01:05:16.494516 | orchestrator |
2025-05-25 01:05:16.494523 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-05-25 01:05:16.494529 | orchestrator | Sunday 25 May 2025 01:04:49 +0000 (0:00:01.736) 0:01:34.705 ************
2025-05-25 01:05:16.494536 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:05:16.494543 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.494549 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.494556 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.494562 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.494569 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.494576 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.494582 | orchestrator |
2025-05-25 01:05:16.494589 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-05-25 01:05:16.494595 | orchestrator | Sunday 25 May 2025 01:04:50 +0000 (0:00:01.004) 0:01:35.710 ************
2025-05-25 01:05:16.494602 | orchestrator | skipping: [testbed-manager]
2025-05-25 01:05:16.494614 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.494624 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.494642 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.494654 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.494665 | orchestrator | skipping: [testbed-node-4]
2025-05-25 01:05:16.494676 | orchestrator | skipping: [testbed-node-5]
2025-05-25 01:05:16.494687 | orchestrator |
2025-05-25 01:05:16.494698 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] ****************
2025-05-25 01:05:16.494709 | orchestrator | Sunday 25 May 2025 01:04:51 +0000 (0:00:00.961) 0:01:36.671 ************
2025-05-25 01:05:16.494731 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-05-25 01:05:16.494743 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:05:16.494753 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-05-25 01:05:16.494763 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:05:16.494773 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-05-25 01:05:16.494783 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:05:16.494793 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-05-25 01:05:16.494803 | orchestrator | skipping: [testbed-node-3]
2025-05-25 01:05:16.494812 | orchestrator | skipping: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-25 01:05:16.494822 | orchestrator | skipping: [testbed-manager] 2025-05-25 01:05:16.494832 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-25 01:05:16.494842 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:05:16.494852 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-25 01:05:16.494863 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:05:16.494873 | orchestrator | 2025-05-25 01:05:16.494883 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-25 01:05:16.494893 | orchestrator | Sunday 25 May 2025 01:04:53 +0000 (0:00:02.460) 0:01:39.131 ************ 2025-05-25 01:05:16.494904 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-25 01:05:16.494914 | orchestrator | skipping: [testbed-node-0] 2025-05-25 01:05:16.494930 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-25 01:05:16.494941 | orchestrator | skipping: [testbed-node-4] 2025-05-25 01:05:16.494952 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-25 01:05:16.494968 | orchestrator | skipping: [testbed-node-1] 2025-05-25 01:05:16.494980 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-25 01:05:16.494992 | orchestrator | skipping: [testbed-node-3] 2025-05-25 01:05:16.495002 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-25 01:05:16.495013 | orchestrator | skipping: [testbed-node-5] 2025-05-25 01:05:16.495025 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-25 01:05:16.495036 | orchestrator | skipping: [testbed-node-2] 2025-05-25 01:05:16.495048 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-25 01:05:16.495058 | orchestrator | skipping: [testbed-manager] 2025-05-25 01:05:16.495069 | orchestrator | 2025-05-25 01:05:16.495076 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-25 01:05:16.495083 | orchestrator | Sunday 25 May 2025 01:04:56 +0000 (0:00:02.944) 0:01:42.076 ************ 2025-05-25 01:05:16.495092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.495108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.495122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.495135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 
01:05:16.495142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.495149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-25 01:05:16.495162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-25 01:05:16.495175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.495183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.495190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.495200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.495257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.495264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.495291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495309 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-25 01:05:16.495321 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495328 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.495381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.495389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.495440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.495447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.495483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.495495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495584 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-25 01:05:16.495596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.495612 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.495680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.495696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-25 01:05:16.495711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.495726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-25 01:05:16.495739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.495765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-25 01:05:16.495783 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495793 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.495806 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-25 01:05:16.495836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.495871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.495914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-25 01:05:16.495931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-25 01:05:16.495943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-25 01:05:16.495979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-25 01:05:16.495986 | orchestrator | 2025-05-25 01:05:16.495994 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-25 01:05:16.496005 | orchestrator | Sunday 25 May 2025 01:05:01 +0000 (0:00:04.773) 0:01:46.850 ************ 2025-05-25 01:05:16.496017 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms') 2025-05-25 01:05:16.496037 | orchestrator | failed: [testbed-manager] (item=testbed-node-0) => {"action": "mysql_user", "ansible_loop_var": "item", "changed": false, "item": {"key": "0", "value": {"hosts": ["testbed-node-0", "testbed-node-1", "testbed-node-2"]}}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1748135103.3717864-2065-94597948750254/AnsiballZ_mysql_user.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748135103.3717864-2065-94597948750254/AnsiballZ_mysql_user.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748135103.3717864-2065-94597948750254/AnsiballZ_mysql_user.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.mysql.plugins.modules.mysql_user', init_globals=dict(_module_fqn='ansible_collections.community.mysql.plugins.modules.mysql_user', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, 
mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_mysql_user_payload_ur92n8m2/ansible_mysql_user_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_user.py\", line 482, in \n File \"/tmp/ansible_mysql_user_payload_ur92n8m2/ansible_mysql_user_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_user.py\", line 428, in main\n File \"/tmp/ansible_mysql_user_payload_ur92n8m2/ansible_mysql_user_payload.zip/ansible_collections/community/mysql/plugins/module_utils/user.py\", line 901, in get_impl\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 153, in execute\n result = self._query(query)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 322, in _query\n conn.query(q)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 558, in query\n self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 822, in _read_query_result\n result.read()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 1200, in read\n first_packet = self.connection._read_packet()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 772, in _read_packet\n packet.raise_for_error()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/protocol.py\", line 221, in raise_for_error\n err.raise_mysql_exception(self._data)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/err.py\", line 143, in raise_mysql_exception\n raise errorclass(errno, errval)\npymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms')\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-05-25 01:05:16.496057 | orchestrator | 2025-05-25 01:05:16.496069 | orchestrator | PLAY 
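The failure above is not a pymysql or Ansible bug as such: error 9001 ('Max connect timeout reached while reaching hostgroup 0 after 10000ms') is raised by ProxySQL when no backend MariaDB server in hostgroup 0 became reachable within its connect-timeout window (10000 ms matches ProxySQL's default `mysql-connect_timeout_server_max`), which the `community.mysql.mysql_user` module then surfaces as a MODULE FAILURE. This is commonly a transient condition while the MariaDB/ProxySQL containers are still converging, so a bounded retry around the connection attempt is one way to ride it out. Below is a minimal stdlib-only sketch of that retry pattern; `flaky_connect` is a hypothetical stand-in for the real `pymysql.connect` call, and the attempt counts and delays are illustrative, not values taken from this job.

```python
import time


def call_with_retries(connect, attempts=3, delay=0.1, retriable=(OSError,)):
    """Call connect(); on a retriable error, wait and try again.

    Re-raises the last error once all attempts are exhausted, so a
    persistent outage still fails loudly instead of being swallowed.
    """
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except retriable:
            if attempt == attempts:
                raise
            time.sleep(delay)


# Demo with a stand-in for pymysql.connect (hypothetical, for illustration):
# the first two calls fail the way ProxySQL's 9001 timeout would, the third
# succeeds once the backend hostgroup is reachable.
state = {"calls": 0}


def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise OSError("Max connect timeout reached while reaching hostgroup 0")
    return "connection"


result = call_with_retries(flaky_connect, attempts=5, delay=0)
```

In an Ansible context the equivalent knob is usually `retries`/`until` on the task rather than code, but the control flow being retried is the same as sketched here.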
RECAP ********************************************************************* 2025-05-25 01:05:16.496080 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2025-05-25 01:05:16.496097 | orchestrator | testbed-node-0 : ok=10  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-25 01:05:16.496110 | orchestrator | testbed-node-1 : ok=10  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-25 01:05:16.496122 | orchestrator | testbed-node-2 : ok=10  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-25 01:05:16.496132 | orchestrator | testbed-node-3 : ok=9  changed=4  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-25 01:05:16.496145 | orchestrator | testbed-node-4 : ok=9  changed=4  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-25 01:05:16.496156 | orchestrator | testbed-node-5 : ok=9  changed=4  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-25 01:05:16.496166 | orchestrator | 2025-05-25 01:05:16.496177 | orchestrator | 2025-05-25 01:05:16.496188 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-25 01:05:16.496199 | orchestrator | Sunday 25 May 2025 01:05:14 +0000 (0:00:12.841) 0:01:59.691 ************ 2025-05-25 01:05:16.496264 | orchestrator | =============================================================================== 2025-05-25 01:05:16.496277 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 31.18s 2025-05-25 01:05:16.496285 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.93s 2025-05-25 01:05:16.496291 | orchestrator | prometheus : Creating prometheus database user and setting permissions -- 12.84s 2025-05-25 01:05:16.496298 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.51s 2025-05-25 01:05:16.496304 | 
orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.98s 2025-05-25 01:05:16.496311 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.77s 2025-05-25 01:05:16.496317 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.44s 2025-05-25 01:05:16.496324 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.28s 2025-05-25 01:05:16.496330 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.62s 2025-05-25 01:05:16.496337 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.46s 2025-05-25 01:05:16.496343 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.35s 2025-05-25 01:05:16.496350 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.03s 2025-05-25 01:05:16.496356 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 2.94s 2025-05-25 01:05:16.496369 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.72s 2025-05-25 01:05:16.496376 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 2.46s 2025-05-25 01:05:16.496382 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.07s 2025-05-25 01:05:16.496389 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.74s 2025-05-25 01:05:16.496396 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.69s 2025-05-25 01:05:16.496402 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.52s 2025-05-25 01:05:16.496409 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.32s 2025-05-25 01:05:16.496420 | 
orchestrator | 2025-05-25 01:05:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:16.496427 | orchestrator | 2025-05-25 01:05:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:16.496434 | orchestrator | 2025-05-25 01:05:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:16.496441 | orchestrator | 2025-05-25 01:05:16 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:19.543548 | orchestrator | 2025-05-25 01:05:19 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:19.545149 | orchestrator | 2025-05-25 01:05:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:19.548354 | orchestrator | 2025-05-25 01:05:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:19.549870 | orchestrator | 2025-05-25 01:05:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:19.549899 | orchestrator | 2025-05-25 01:05:19 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:22.602646 | orchestrator | 2025-05-25 01:05:22 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:22.605618 | orchestrator | 2025-05-25 01:05:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:22.608109 | orchestrator | 2025-05-25 01:05:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:22.610152 | orchestrator | 2025-05-25 01:05:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:22.610283 | orchestrator | 2025-05-25 01:05:22 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:25.661414 | orchestrator | 2025-05-25 01:05:25 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:25.662703 | orchestrator | 2025-05-25 01:05:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:25.665665 | orchestrator | 2025-05-25 01:05:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:25.667112 | orchestrator | 2025-05-25 01:05:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:25.667143 | orchestrator | 2025-05-25 01:05:25 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:28.708562 | orchestrator | 2025-05-25 01:05:28 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:28.709282 | orchestrator | 2025-05-25 01:05:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:28.711494 | orchestrator | 2025-05-25 01:05:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:28.714946 | orchestrator | 2025-05-25 01:05:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:28.715298 | orchestrator | 2025-05-25 01:05:28 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:31.767036 | orchestrator | 2025-05-25 01:05:31 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:31.770194 | orchestrator | 2025-05-25 01:05:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:31.771524 | orchestrator | 2025-05-25 01:05:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:31.772771 | orchestrator | 2025-05-25 01:05:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:31.773155 | orchestrator | 2025-05-25 01:05:31 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:34.821288 | orchestrator | 2025-05-25 01:05:34 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:34.821659 | orchestrator | 2025-05-25 01:05:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:34.824850 | orchestrator | 2025-05-25 01:05:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:34.826996 | orchestrator | 2025-05-25 01:05:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:34.827031 | orchestrator | 2025-05-25 01:05:34 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:37.875662 | orchestrator | 2025-05-25 01:05:37 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:37.877074 | orchestrator | 2025-05-25 01:05:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:37.878090 | orchestrator | 2025-05-25 01:05:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:37.879120 | orchestrator | 2025-05-25 01:05:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:37.879148 | orchestrator | 2025-05-25 01:05:37 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:40.926550 | orchestrator | 2025-05-25 01:05:40 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:40.927038 | orchestrator | 2025-05-25 01:05:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:40.927955 | orchestrator | 2025-05-25 01:05:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:40.930155 | orchestrator | 2025-05-25 01:05:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:40.930178 | orchestrator | 2025-05-25 01:05:40 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:43.985937 | orchestrator | 2025-05-25 01:05:43 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:43.987415 | orchestrator | 2025-05-25 01:05:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:43.991083 | orchestrator | 2025-05-25 01:05:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:43.994278 | orchestrator | 2025-05-25 01:05:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:43.994308 | orchestrator | 2025-05-25 01:05:43 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:47.045972 | orchestrator | 2025-05-25 01:05:47 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:47.049729 | orchestrator | 2025-05-25 01:05:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:47.051677 | orchestrator | 2025-05-25 01:05:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:47.052604 | orchestrator | 2025-05-25 01:05:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:47.052628 | orchestrator | 2025-05-25 01:05:47 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:50.103293 | orchestrator | 2025-05-25 01:05:50 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:50.104846 | orchestrator | 2025-05-25 01:05:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:50.106166 | orchestrator | 2025-05-25 01:05:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:50.107318 | orchestrator | 2025-05-25 01:05:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:50.107680 | orchestrator | 2025-05-25 01:05:50 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:53.155947 | orchestrator | 2025-05-25 01:05:53 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:53.157912 | orchestrator | 2025-05-25 01:05:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:53.159314 | orchestrator | 2025-05-25 01:05:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:53.161561 | orchestrator | 2025-05-25 01:05:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:53.161614 | orchestrator | 2025-05-25 01:05:53 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:56.218900 | orchestrator | 2025-05-25 01:05:56 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:56.220720 | orchestrator | 2025-05-25 01:05:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:56.222465 | orchestrator | 2025-05-25 01:05:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:56.224443 | orchestrator | 2025-05-25 01:05:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:56.226172 | orchestrator | 2025-05-25 01:05:56 | INFO  | Task 554752e7-aa8c-452a-a69c-3961bc8e9ec2 is in state STARTED
2025-05-25 01:05:56.226188 | orchestrator | 2025-05-25 01:05:56 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:05:59.280079 | orchestrator | 2025-05-25 01:05:59 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:05:59.280991 | orchestrator | 2025-05-25 01:05:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:05:59.283164 | orchestrator | 2025-05-25 01:05:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:05:59.284921 | orchestrator | 2025-05-25 01:05:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:05:59.286481 | orchestrator | 2025-05-25 01:05:59 | INFO  | Task 554752e7-aa8c-452a-a69c-3961bc8e9ec2 is in state STARTED
2025-05-25 01:05:59.286783 | orchestrator | 2025-05-25 01:05:59 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:06:02.332171 | orchestrator | 2025-05-25 01:06:02 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:06:02.336759 | orchestrator | 2025-05-25 01:06:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:06:02.339108 | orchestrator | 2025-05-25 01:06:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:06:02.340976 | orchestrator | 2025-05-25 01:06:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:06:02.342407 | orchestrator | 2025-05-25 01:06:02 | INFO  | Task 554752e7-aa8c-452a-a69c-3961bc8e9ec2 is in state STARTED
2025-05-25 01:06:02.342607 | orchestrator | 2025-05-25 01:06:02 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:06:05.386905 | orchestrator | 2025-05-25 01:06:05 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:06:05.387529 | orchestrator | 2025-05-25 01:06:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:06:05.388284 | orchestrator | 2025-05-25 01:06:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:06:05.389152 | orchestrator | 2025-05-25 01:06:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:06:05.390002 | orchestrator | 2025-05-25 01:06:05 | INFO  | Task 554752e7-aa8c-452a-a69c-3961bc8e9ec2 is in state SUCCESS
2025-05-25 01:06:05.390079 | orchestrator | 2025-05-25 01:06:05 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:06:08.438557 | orchestrator | 2025-05-25 01:06:08 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:06:08.439934 | orchestrator | 2025-05-25 01:06:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:06:08.443559 | orchestrator | 2025-05-25 01:06:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:06:08.444820 | orchestrator | 2025-05-25 01:06:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:06:08.444896 | orchestrator | 2025-05-25 01:06:08 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:06:11.494111 | orchestrator | 2025-05-25 01:06:11 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:06:11.494582 | orchestrator | 2025-05-25 01:06:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:06:11.496043 | orchestrator | 2025-05-25 01:06:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:06:11.497585 | orchestrator | 2025-05-25 01:06:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:06:11.497659 | orchestrator | 2025-05-25 01:06:11 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:06:14.557318 | orchestrator | 2025-05-25 01:06:14 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state STARTED
2025-05-25 01:06:14.558643 | orchestrator | 2025-05-25 01:06:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:06:14.562383 | orchestrator | 2025-05-25 01:06:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:06:14.565039 | orchestrator | 2025-05-25 01:06:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:06:14.565085 | orchestrator | 2025-05-25 01:06:14 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:06:17.619838 | orchestrator |
2025-05-25 01:06:17.619938 | orchestrator | None
2025-05-25 01:06:17.619954 | orchestrator |
2025-05-25 01:06:17.620023 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-25 01:06:17.620040 | orchestrator |
2025-05-25 01:06:17.620052 | orchestrator | TASK [Group hosts based on Kolla
action] ***************************************
2025-05-25 01:06:17.620064 | orchestrator | Sunday 25 May 2025 01:05:17 +0000 (0:00:00.291) 0:00:00.291 ************
2025-05-25 01:06:17.620075 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:06:17.620088 | orchestrator | ok: [testbed-node-1]
2025-05-25 01:06:17.620100 | orchestrator | ok: [testbed-node-2]
2025-05-25 01:06:17.620111 | orchestrator |
2025-05-25 01:06:17.620122 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-25 01:06:17.620263 | orchestrator | Sunday 25 May 2025 01:05:18 +0000 (0:00:00.359) 0:00:00.651 ************
2025-05-25 01:06:17.620280 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-05-25 01:06:17.620292 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-05-25 01:06:17.620303 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-05-25 01:06:17.620314 | orchestrator |
2025-05-25 01:06:17.620325 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-05-25 01:06:17.620336 | orchestrator |
2025-05-25 01:06:17.620347 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-25 01:06:17.620358 | orchestrator | Sunday 25 May 2025 01:05:18 +0000 (0:00:00.283) 0:00:00.934 ************
2025-05-25 01:06:17.620372 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 01:06:17.620386 | orchestrator |
2025-05-25 01:06:17.620399 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-05-25 01:06:17.620412 | orchestrator | Sunday 25 May 2025 01:05:19 +0000 (0:00:00.646) 0:00:01.581 ************
2025-05-25 01:06:17.620592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620634 | orchestrator |
2025-05-25 01:06:17.620645 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-05-25 01:06:17.620656 | orchestrator | Sunday 25 May 2025 01:05:19 +0000 (0:00:00.775) 0:00:02.356 ************
2025-05-25 01:06:17.620667 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-05-25 01:06:17.620678 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-05-25 01:06:17.620689 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 01:06:17.620701 | orchestrator |
2025-05-25 01:06:17.620712 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-25 01:06:17.620723 | orchestrator | Sunday 25 May 2025 01:05:20 +0000 (0:00:00.529) 0:00:02.886 ************
2025-05-25 01:06:17.620744 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-25 01:06:17.620755 | orchestrator |
2025-05-25 01:06:17.620766 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-05-25 01:06:17.620777 | orchestrator | Sunday 25 May 2025 01:05:20 +0000 (0:00:00.579) 0:00:03.465 ************
2025-05-25 01:06:17.620808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620849 | orchestrator |
2025-05-25 01:06:17.620860 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-05-25 01:06:17.620871 | orchestrator | Sunday 25 May 2025 01:05:22 +0000 (0:00:01.325) 0:00:04.791 ************
2025-05-25 01:06:17.620883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620894 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:06:17.620906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620924 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:06:17.620945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.620956 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:06:17.620968 | orchestrator |
2025-05-25 01:06:17.620985 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-05-25 01:06:17.621003 | orchestrator | Sunday 25 May 2025 01:05:22 +0000 (0:00:00.661) 0:00:05.453 ************
2025-05-25 01:06:17.621020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621040 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:06:17.621066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621087 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:06:17.621101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621113 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:06:17.621124 | orchestrator |
2025-05-25 01:06:17.621135 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-05-25 01:06:17.621146 | orchestrator | Sunday 25 May 2025 01:05:23 +0000 (0:00:00.663) 0:00:06.116 ************
2025-05-25 01:06:17.621157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621207 | orchestrator |
2025-05-25 01:06:17.621218 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-05-25 01:06:17.621229 | orchestrator | Sunday 25 May 2025 01:05:24 +0000 (0:00:01.362) 0:00:07.478 ************
2025-05-25 01:06:17.621240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.621320 | orchestrator |
2025-05-25 01:06:17.621331 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-05-25 01:06:17.621342 | orchestrator | Sunday 25 May 2025 01:05:26 +0000 (0:00:01.483) 0:00:08.962 ************
2025-05-25 01:06:17.621353 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:06:17.621363 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:06:17.621374 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:06:17.621385 | orchestrator |
2025-05-25 01:06:17.621396 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-05-25 01:06:17.621407 | orchestrator | Sunday 25 May 2025 01:05:26 +0000 (0:00:00.260) 0:00:09.222 ************
2025-05-25 01:06:17.621417 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-25 01:06:17.621429 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-25 01:06:17.621439 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-25 01:06:17.621450 | orchestrator |
2025-05-25 01:06:17.621461 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-05-25 01:06:17.621471 | orchestrator | Sunday 25 May 2025 01:05:28 +0000 (0:00:01.348) 0:00:10.571 ************
2025-05-25 01:06:17.621483 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-25 01:06:17.621500 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-25 01:06:17.621511 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-25 01:06:17.621522 | orchestrator |
2025-05-25 01:06:17.621533 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-05-25 01:06:17.621544 | orchestrator | Sunday 25 May 2025 01:05:29 +0000 (0:00:01.376) 0:00:11.947 ************
2025-05-25 01:06:17.621555 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-25 01:06:17.621566 | orchestrator |
2025-05-25 01:06:17.621577 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-05-25 01:06:17.621587 | orchestrator | Sunday 25 May 2025 01:05:29 +0000 (0:00:00.425) 0:00:12.373 ************
2025-05-25 01:06:17.621598 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-05-25 01:06:17.621609 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-05-25 01:06:17.621620 | orchestrator | ok: [testbed-node-0]
2025-05-25 01:06:17.621630 | orchestrator | ok: [testbed-node-1]
2025-05-25 01:06:17.621641 | orchestrator | ok: [testbed-node-2]
2025-05-25 01:06:17.621652 | orchestrator |
2025-05-25 01:06:17.621663 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-05-25 01:06:17.621673 | orchestrator | Sunday 25 May 2025 01:05:30 +0000 (0:00:00.797) 0:00:13.171 ************
2025-05-25 01:06:17.621684 | orchestrator | skipping: [testbed-node-0]
2025-05-25 01:06:17.621695 | orchestrator | skipping: [testbed-node-1]
2025-05-25 01:06:17.621706 | orchestrator | skipping: [testbed-node-2]
2025-05-25 01:06:17.621716 | orchestrator |
2025-05-25 01:06:17.621727 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-05-25 01:06:17.621738 | orchestrator | Sunday 25 May 2025 01:05:31 +0000 (0:00:00.400) 0:00:13.571 ************
2025-05-25 01:06:17.621754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1326814, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1853058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.621773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1326814, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1853058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.621785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1326814, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1853058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.621797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1326809, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime':
1748132225.180306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1326809, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.180306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1326809, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.180306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1326805, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1748132225.177306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1326805, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.177306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1326805, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.177306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1326812, 'dev': 151, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1326812, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1326812, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1326800, 'dev': 151, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1723058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.621995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1326800, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1723058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1326800, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1723058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1326806, 'dev': 
151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1783059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1326806, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1783059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1326806, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1783059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
16156, 'inode': 1326811, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1326811, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1326811, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1326799, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1723058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1326799, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1723058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.622201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1326799, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1723058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17 | INFO  | Task a6159ca2-3428-4c8f-96b4-a4daac145790 is in state SUCCESS 2025-05-25 01:06:17.622977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': 
'/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1326794, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1673057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1326794, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1673057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1326794, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1673057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1326801, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1733057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1326801, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1733057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1326801, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1733057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623115 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1326796, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1703057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1326796, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1703057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1326796, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1703057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623165 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1326810, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1326810, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1326810, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.181306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623208 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1326802, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.1753058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1326802, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.1753058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1326802, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.1753058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1326813, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1823058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1326813, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1823058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1326813, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1823058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1326798, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.171306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1326798, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.171306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1326798, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.171306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1326807, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1793058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1326807, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1793058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1326807, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1793058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1326795, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1693058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1326795, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1693058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1326795, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1693058, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1326797, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.171306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1326797, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.171306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1326797, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.171306, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1326803, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1763058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1326803, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1763058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1326803, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1748132225.1763058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326824, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.208306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326824, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.208306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326824, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.208306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326822, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.199306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326822, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.199306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326822, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.199306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1326828, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2133062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1326828, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2133062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1326828, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2133062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326816, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.186306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326816, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.186306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623799 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326816, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.186306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1326829, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.214306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1326829, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.214306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-05-25 01:06:17.623838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1326829, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.214306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1326825, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2103062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1326825, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2103062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1326825, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2103062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1326826, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2103062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1326826, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1748132225.2103062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1326826, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2103062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326817, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.187306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326817, 'dev': 151, 
'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.187306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326817, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.187306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.623987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326823, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.200306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 24243, 'inode': 1326823, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.200306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326823, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.200306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1326830, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.215306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1326830, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.215306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1326830, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.215306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1326827, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.212306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1326827, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.212306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1326827, 'dev': 151, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748132225.212306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326819, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1913059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326819, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1913059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326819, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.1913059, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326818, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.187306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624168 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326818, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.187306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326818, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.187306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326820, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.193306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-05-25 01:06:17.624217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326820, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.193306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326820, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.193306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-25 01:06:17.624281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326821, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.199306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.624294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326821, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.199306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.624310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326821, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.199306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.624322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1326832, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2163062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True,
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.624340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1326832, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2163062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.624352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1326832, 'dev': 151, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748132225.2163062, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-25 01:06:17.624363 | orchestrator |
2025-05-25 01:06:17.624376 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-05-25 01:06:17.624389 | orchestrator | Sunday 25 May 2025 01:06:03 +0000 (0:00:32.497) 0:00:46.068 ************
2025-05-25 01:06:17.624406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes':
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.624419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.624435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-25 01:06:17.624447 | orchestrator |
2025-05-25 01:06:17.624458 | orchestrator | TASK
[grafana : Creating grafana database] *************************************
2025-05-25 01:06:17.624470 | orchestrator | Sunday 25 May 2025 01:06:04 +0000 (0:00:01.038) 0:00:47.107 ************
2025-05-25 01:06:17.624482 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms')
2025-05-25 01:06:17.624513 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "mysql_db", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1748135166.0959637-3758-158092355886589/AnsiballZ_mysql_db.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1748135166.0959637-3758-158092355886589/AnsiballZ_mysql_db.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1748135166.0959637-3758-158092355886589/AnsiballZ_mysql_db.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.mysql.plugins.modules.mysql_db', init_globals=dict(_module_fqn='ansible_collections.community.mysql.plugins.modules.mysql_db', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_mysql_db_payload_jh3mm4to/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 725, in \n File \"/tmp/ansible_mysql_db_payload_jh3mm4to/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 662, in main\n File
\"/tmp/ansible_mysql_db_payload_jh3mm4to/ansible_mysql_db_payload.zip/ansible_collections/community/mysql/plugins/modules/mysql_db.py\", line 337, in db_exists\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 153, in execute\n result = self._query(query)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/cursors.py\", line 322, in _query\n conn.query(q)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 558, in query\n self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 822, in _read_query_result\n result.read()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 1200, in read\n first_packet = self.connection._read_packet()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/connections.py\", line 772, in _read_packet\n packet.raise_for_error()\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/protocol.py\", line 221, in raise_for_error\n err.raise_mysql_exception(self._data)\n File \"/opt/ansible/lib/python3.10/site-packages/pymysql/err.py\", line 143, in raise_mysql_exception\n raise errorclass(errno, errval)\npymysql.err.OperationalError: (9001, 'Max connect timeout reached while reaching hostgroup 0 after 10000ms')\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-05-25 01:06:17.624527 | orchestrator |
2025-05-25 01:06:17.624539 | orchestrator | PLAY RECAP *********************************************************************
2025-05-25 01:06:17.624550 | orchestrator | testbed-node-0 : ok=15  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2025-05-25 01:06:17.624561 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-25 01:06:17.624579 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=4
rescued=0 ignored=0
2025-05-25 01:06:17.624590 | orchestrator |
2025-05-25 01:06:17.624601 | orchestrator |
2025-05-25 01:06:17.624612 | orchestrator | TASKS RECAP ********************************************************************
2025-05-25 01:06:17.624623 | orchestrator | Sunday 25 May 2025 01:06:16 +0000 (0:00:12.214) 0:00:59.322 ************
2025-05-25 01:06:17.624640 | orchestrator | ===============================================================================
2025-05-25 01:06:17.624651 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 32.50s
2025-05-25 01:06:17.624662 | orchestrator | grafana : Creating grafana database ------------------------------------ 12.21s
2025-05-25 01:06:17.624673 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.48s
2025-05-25 01:06:17.624683 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.38s
2025-05-25 01:06:17.624694 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.36s
2025-05-25 01:06:17.624705 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.35s
2025-05-25 01:06:17.624715 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.33s
2025-05-25 01:06:17.624726 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s
2025-05-25 01:06:17.624736 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.80s
2025-05-25 01:06:17.624747 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.78s
2025-05-25 01:06:17.624758 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s
2025-05-25 01:06:17.624768 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.66s
2025-05-25 01:06:17.624779 |
orchestrator | grafana : include_tasks ------------------------------------------------- 0.65s
2025-05-25 01:06:17.624790 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.58s
2025-05-25 01:06:17.624800 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.53s
2025-05-25 01:06:17.624811 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.43s
2025-05-25 01:06:17.624821 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.40s
2025-05-25 01:06:17.624832 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2025-05-25 01:06:17.624843 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.28s
2025-05-25 01:06:17.624853 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.26s
2025-05-25 01:06:17.624881 | orchestrator | 2025-05-25 01:06:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:06:17.624892 | orchestrator | 2025-05-25 01:06:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:06:17.626887 | orchestrator | 2025-05-25 01:06:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:06:17.626961 | orchestrator | 2025-05-25 01:06:17 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:06:20.673179 | orchestrator | 2025-05-25 01:06:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:06:20.674481 | orchestrator | 2025-05-25 01:06:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:06:20.677538 | orchestrator | 2025-05-25 01:06:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:06:20.677585 | orchestrator | 2025-05-25 01:06:20 | INFO  | Wait 1 second(s) until the next
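The "Creating grafana database" failure above is pymysql error 9001 raised by ProxySQL ('Max connect timeout reached while reaching hostgroup 0 after 10000ms'), which typically means no MariaDB backend in hostgroup 0 answered within the connect-timeout window, often a transient condition while the database layer is still converging. A minimal sketch (not part of this job; the retryable error-code list and backoff are assumptions) of retrying such transient errors before failing the task:

```python
import time

# Hedged sketch: retry a DB operation on transient MySQL/ProxySQL error
# codes such as 9001 ("Max connect timeout reached while reaching
# hostgroup 0 ..."). The code set and backoff policy are assumptions.
TRANSIENT = {9001, 2003, 2013}

def retry_transient(op, attempts=5, delay=1.0):
    """Run op(); retry with linear backoff on transient error codes."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return op()
        except Exception as exc:  # pymysql errors carry (errno, msg) in .args
            errno = exc.args[0] if exc.args else None
            if errno not in TRANSIENT:
                raise
            last_exc = exc
            time.sleep(delay * attempt)  # 0 before the first retry
    raise last_exc

# Demo with a stub that raises error 9001 twice, then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise OSError(9001, "Max connect timeout reached while reaching hostgroup 0 after 10000ms")
    return "connected"

print(retry_transient(flaky, delay=0.0))  # → connected
```

In an Ansible play the equivalent knob would be `retries`/`until` on the database task rather than code, but the failure mode is the same.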
check 2025-05-25 01:08:44.083157 | orchestrator | 2025-05-25 01:08:44 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:08:44.083367 | orchestrator | 2025-05-25 01:08:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:08:44.083653 | orchestrator | 2025-05-25 01:08:44 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:08:44.083669 | orchestrator | 2025-05-25 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:08:47.127659 | orchestrator | 2025-05-25 01:08:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:08:47.128798 | orchestrator | 2025-05-25 01:08:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:08:47.130273 | orchestrator | 2025-05-25 01:08:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:08:47.130346 | orchestrator | 2025-05-25 01:08:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:08:50.178264 | orchestrator | 2025-05-25 01:08:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:08:50.180019 | orchestrator | 2025-05-25 01:08:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:08:50.181723 | orchestrator | 2025-05-25 01:08:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:08:50.181942 | orchestrator | 2025-05-25 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:08:53.234825 | orchestrator | 2025-05-25 01:08:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:08:53.236696 | orchestrator | 2025-05-25 01:08:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:08:53.238214 | orchestrator | 2025-05-25 01:08:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:08:53.238262 | orchestrator | 2025-05-25 01:08:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:08:56.287072 | orchestrator | 2025-05-25 01:08:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:08:56.288841 | orchestrator | 2025-05-25 01:08:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:08:56.290811 | orchestrator | 2025-05-25 01:08:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:08:56.290843 | orchestrator | 2025-05-25 01:08:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:08:59.343838 | orchestrator | 2025-05-25 01:08:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:08:59.343972 | orchestrator | 2025-05-25 01:08:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:08:59.345456 | orchestrator | 2025-05-25 01:08:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:08:59.345684 | orchestrator | 2025-05-25 01:08:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:02.397276 | orchestrator | 2025-05-25 01:09:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:02.398809 | orchestrator | 2025-05-25 01:09:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:02.400931 | orchestrator | 2025-05-25 01:09:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:02.400983 | orchestrator | 2025-05-25 01:09:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:05.451066 | orchestrator | 2025-05-25 01:09:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:05.452576 | orchestrator | 2025-05-25 01:09:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:05.455154 | orchestrator | 2025-05-25 01:09:05 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:05.455374 | orchestrator | 2025-05-25 01:09:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:08.503873 | orchestrator | 2025-05-25 01:09:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:08.505094 | orchestrator | 2025-05-25 01:09:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:08.506897 | orchestrator | 2025-05-25 01:09:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:08.506930 | orchestrator | 2025-05-25 01:09:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:11.553539 | orchestrator | 2025-05-25 01:09:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:11.554916 | orchestrator | 2025-05-25 01:09:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:11.556940 | orchestrator | 2025-05-25 01:09:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:11.556973 | orchestrator | 2025-05-25 01:09:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:14.605296 | orchestrator | 2025-05-25 01:09:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:14.606391 | orchestrator | 2025-05-25 01:09:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:14.608078 | orchestrator | 2025-05-25 01:09:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:14.608177 | orchestrator | 2025-05-25 01:09:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:17.647998 | orchestrator | 2025-05-25 01:09:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:17.649680 | orchestrator | 2025-05-25 01:09:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:09:17.651666 | orchestrator | 2025-05-25 01:09:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:17.652008 | orchestrator | 2025-05-25 01:09:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:20.701741 | orchestrator | 2025-05-25 01:09:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:20.703639 | orchestrator | 2025-05-25 01:09:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:20.704732 | orchestrator | 2025-05-25 01:09:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:20.704760 | orchestrator | 2025-05-25 01:09:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:23.755398 | orchestrator | 2025-05-25 01:09:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:23.757804 | orchestrator | 2025-05-25 01:09:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:23.759610 | orchestrator | 2025-05-25 01:09:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:23.759652 | orchestrator | 2025-05-25 01:09:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:26.808234 | orchestrator | 2025-05-25 01:09:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:26.808677 | orchestrator | 2025-05-25 01:09:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:26.812416 | orchestrator | 2025-05-25 01:09:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:26.812469 | orchestrator | 2025-05-25 01:09:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:29.867218 | orchestrator | 2025-05-25 01:09:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:29.868118 | orchestrator 
| 2025-05-25 01:09:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:29.871705 | orchestrator | 2025-05-25 01:09:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:29.871759 | orchestrator | 2025-05-25 01:09:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:32.916958 | orchestrator | 2025-05-25 01:09:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:32.917699 | orchestrator | 2025-05-25 01:09:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:32.918840 | orchestrator | 2025-05-25 01:09:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:32.918887 | orchestrator | 2025-05-25 01:09:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:35.969610 | orchestrator | 2025-05-25 01:09:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:35.971772 | orchestrator | 2025-05-25 01:09:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:35.973910 | orchestrator | 2025-05-25 01:09:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:35.974004 | orchestrator | 2025-05-25 01:09:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:39.026199 | orchestrator | 2025-05-25 01:09:39 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:39.028059 | orchestrator | 2025-05-25 01:09:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:39.181500 | orchestrator | 2025-05-25 01:09:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:39.181587 | orchestrator | 2025-05-25 01:09:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:42.080913 | orchestrator | 2025-05-25 01:09:42 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:42.082137 | orchestrator | 2025-05-25 01:09:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:42.083687 | orchestrator | 2025-05-25 01:09:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:42.083715 | orchestrator | 2025-05-25 01:09:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:45.129651 | orchestrator | 2025-05-25 01:09:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:45.130711 | orchestrator | 2025-05-25 01:09:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:45.132566 | orchestrator | 2025-05-25 01:09:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:45.132614 | orchestrator | 2025-05-25 01:09:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:48.183802 | orchestrator | 2025-05-25 01:09:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:48.184903 | orchestrator | 2025-05-25 01:09:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:48.186682 | orchestrator | 2025-05-25 01:09:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:48.186724 | orchestrator | 2025-05-25 01:09:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:51.235859 | orchestrator | 2025-05-25 01:09:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:51.237226 | orchestrator | 2025-05-25 01:09:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:51.240070 | orchestrator | 2025-05-25 01:09:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:51.240424 | orchestrator | 2025-05-25 01:09:51 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:09:54.290871 | orchestrator | 2025-05-25 01:09:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:54.293093 | orchestrator | 2025-05-25 01:09:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:54.295514 | orchestrator | 2025-05-25 01:09:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:54.295605 | orchestrator | 2025-05-25 01:09:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:09:57.339003 | orchestrator | 2025-05-25 01:09:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:09:57.340925 | orchestrator | 2025-05-25 01:09:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:09:57.346279 | orchestrator | 2025-05-25 01:09:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:09:57.346335 | orchestrator | 2025-05-25 01:09:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:00.390622 | orchestrator | 2025-05-25 01:10:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:00.390772 | orchestrator | 2025-05-25 01:10:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:00.393384 | orchestrator | 2025-05-25 01:10:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:00.393486 | orchestrator | 2025-05-25 01:10:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:03.439417 | orchestrator | 2025-05-25 01:10:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:03.440565 | orchestrator | 2025-05-25 01:10:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:03.442395 | orchestrator | 2025-05-25 01:10:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:10:03.442453 | orchestrator | 2025-05-25 01:10:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:06.495741 | orchestrator | 2025-05-25 01:10:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:06.496556 | orchestrator | 2025-05-25 01:10:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:06.498093 | orchestrator | 2025-05-25 01:10:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:06.498164 | orchestrator | 2025-05-25 01:10:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:09.547157 | orchestrator | 2025-05-25 01:10:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:09.548688 | orchestrator | 2025-05-25 01:10:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:09.551217 | orchestrator | 2025-05-25 01:10:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:09.551310 | orchestrator | 2025-05-25 01:10:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:12.604937 | orchestrator | 2025-05-25 01:10:12 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:12.606243 | orchestrator | 2025-05-25 01:10:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:12.608010 | orchestrator | 2025-05-25 01:10:12 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:12.608042 | orchestrator | 2025-05-25 01:10:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:15.654601 | orchestrator | 2025-05-25 01:10:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:15.656717 | orchestrator | 2025-05-25 01:10:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:15.659084 | orchestrator | 2025-05-25 01:10:15 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:15.659134 | orchestrator | 2025-05-25 01:10:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:18.711734 | orchestrator | 2025-05-25 01:10:18 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:18.712534 | orchestrator | 2025-05-25 01:10:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:18.714371 | orchestrator | 2025-05-25 01:10:18 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:18.714405 | orchestrator | 2025-05-25 01:10:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:21.765999 | orchestrator | 2025-05-25 01:10:21 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:21.766938 | orchestrator | 2025-05-25 01:10:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:21.768563 | orchestrator | 2025-05-25 01:10:21 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:21.768677 | orchestrator | 2025-05-25 01:10:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:24.814797 | orchestrator | 2025-05-25 01:10:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:24.815206 | orchestrator | 2025-05-25 01:10:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:24.816689 | orchestrator | 2025-05-25 01:10:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:24.816744 | orchestrator | 2025-05-25 01:10:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:27.865524 | orchestrator | 2025-05-25 01:10:27 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:27.866002 | orchestrator | 2025-05-25 01:10:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:10:27.868524 | orchestrator | 2025-05-25 01:10:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:27.868862 | orchestrator | 2025-05-25 01:10:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:30.918121 | orchestrator | 2025-05-25 01:10:30 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:30.918781 | orchestrator | 2025-05-25 01:10:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:30.920634 | orchestrator | 2025-05-25 01:10:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:30.920681 | orchestrator | 2025-05-25 01:10:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:33.966962 | orchestrator | 2025-05-25 01:10:33 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:33.968233 | orchestrator | 2025-05-25 01:10:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:33.969502 | orchestrator | 2025-05-25 01:10:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:33.969523 | orchestrator | 2025-05-25 01:10:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:37.021803 | orchestrator | 2025-05-25 01:10:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:37.023182 | orchestrator | 2025-05-25 01:10:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:37.025058 | orchestrator | 2025-05-25 01:10:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:37.025143 | orchestrator | 2025-05-25 01:10:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:40.069467 | orchestrator | 2025-05-25 01:10:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:40.070114 | orchestrator 
| 2025-05-25 01:10:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:40.071289 | orchestrator | 2025-05-25 01:10:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:40.071319 | orchestrator | 2025-05-25 01:10:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:43.125663 | orchestrator | 2025-05-25 01:10:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:43.127109 | orchestrator | 2025-05-25 01:10:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:43.129776 | orchestrator | 2025-05-25 01:10:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:43.129806 | orchestrator | 2025-05-25 01:10:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:46.183235 | orchestrator | 2025-05-25 01:10:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:46.184664 | orchestrator | 2025-05-25 01:10:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:46.186162 | orchestrator | 2025-05-25 01:10:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:46.186241 | orchestrator | 2025-05-25 01:10:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:49.238678 | orchestrator | 2025-05-25 01:10:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:49.239803 | orchestrator | 2025-05-25 01:10:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:49.241441 | orchestrator | 2025-05-25 01:10:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:49.241464 | orchestrator | 2025-05-25 01:10:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:52.288889 | orchestrator | 2025-05-25 01:10:52 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:52.290321 | orchestrator | 2025-05-25 01:10:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:52.293093 | orchestrator | 2025-05-25 01:10:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:52.293168 | orchestrator | 2025-05-25 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:55.347276 | orchestrator | 2025-05-25 01:10:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:55.350105 | orchestrator | 2025-05-25 01:10:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:55.351308 | orchestrator | 2025-05-25 01:10:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:55.351416 | orchestrator | 2025-05-25 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:10:58.403096 | orchestrator | 2025-05-25 01:10:58 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:10:58.406456 | orchestrator | 2025-05-25 01:10:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:10:58.407892 | orchestrator | 2025-05-25 01:10:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:10:58.407920 | orchestrator | 2025-05-25 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:01.456440 | orchestrator | 2025-05-25 01:11:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:01.457467 | orchestrator | 2025-05-25 01:11:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:01.460265 | orchestrator | 2025-05-25 01:11:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:11:01.460296 | orchestrator | 2025-05-25 01:11:01 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:11:04.502303 | orchestrator | 2025-05-25 01:11:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:04.503664 | orchestrator | 2025-05-25 01:11:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:04.505509 | orchestrator | 2025-05-25 01:11:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:11:04.505560 | orchestrator | 2025-05-25 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:07.553761 | orchestrator | 2025-05-25 01:11:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:07.554642 | orchestrator | 2025-05-25 01:11:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:07.556690 | orchestrator | 2025-05-25 01:11:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:11:07.556754 | orchestrator | 2025-05-25 01:11:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:10.603782 | orchestrator | 2025-05-25 01:11:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:10.607102 | orchestrator | 2025-05-25 01:11:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:10.611950 | orchestrator | 2025-05-25 01:11:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:11:10.612020 | orchestrator | 2025-05-25 01:11:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:13.653615 | orchestrator | 2025-05-25 01:11:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:13.654329 | orchestrator | 2025-05-25 01:11:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:13.658141 | orchestrator | 2025-05-25 01:11:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:11:13.658197 | orchestrator | 2025-05-25 01:11:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:16.708667 | orchestrator | 2025-05-25 01:11:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:16.709991 | orchestrator | 2025-05-25 01:11:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:16.711790 | orchestrator | 2025-05-25 01:11:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:11:16.711839 | orchestrator | 2025-05-25 01:11:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:19.762840 | orchestrator | 2025-05-25 01:11:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:19.764076 | orchestrator | 2025-05-25 01:11:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:19.765556 | orchestrator | 2025-05-25 01:11:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:11:19.765634 | orchestrator | 2025-05-25 01:11:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:22.809863 | orchestrator | 2025-05-25 01:11:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:22.810867 | orchestrator | 2025-05-25 01:11:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:22.812110 | orchestrator | 2025-05-25 01:11:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:11:22.812134 | orchestrator | 2025-05-25 01:11:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:11:25.866355 | orchestrator | 2025-05-25 01:11:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:11:25.867027 | orchestrator | 2025-05-25 01:11:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:11:25.868359 | orchestrator | 2025-05-25 01:11:25 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:11:25.868388 | orchestrator | 2025-05-25 01:11:25 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:11:28.919019 | orchestrator | 2025-05-25 01:11:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:11:28.919840 | orchestrator | 2025-05-25 01:11:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:11:28.920864 | orchestrator | 2025-05-25 01:11:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:11:28.921174 | orchestrator | 2025-05-25 01:11:28 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:11:31 through 01:15:05; tasks 99824a24-efa4-450e-bb69-b7a40fafd92a, 8e1d812e-51fb-49b3-975c-ec462afb5fee, and 87a65d8a-f61b-4aca-827b-a276ae86a9d4 remained in state STARTED throughout ...]
2025-05-25 01:15:08.747373 | orchestrator | 2025-05-25 01:15:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:15:08.749234 | orchestrator | 2025-05-25 01:15:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in
state STARTED 2025-05-25 01:15:08.750911 | orchestrator | 2025-05-25 01:15:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:08.750949 | orchestrator | 2025-05-25 01:15:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:11.800538 | orchestrator | 2025-05-25 01:15:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:11.802655 | orchestrator | 2025-05-25 01:15:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:11.804115 | orchestrator | 2025-05-25 01:15:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:11.804139 | orchestrator | 2025-05-25 01:15:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:14.850654 | orchestrator | 2025-05-25 01:15:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:14.852346 | orchestrator | 2025-05-25 01:15:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:14.853984 | orchestrator | 2025-05-25 01:15:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:14.854071 | orchestrator | 2025-05-25 01:15:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:17.904433 | orchestrator | 2025-05-25 01:15:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:17.905935 | orchestrator | 2025-05-25 01:15:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:17.907530 | orchestrator | 2025-05-25 01:15:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:17.907557 | orchestrator | 2025-05-25 01:15:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:20.958931 | orchestrator | 2025-05-25 01:15:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:20.959869 | orchestrator 
| 2025-05-25 01:15:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:20.961110 | orchestrator | 2025-05-25 01:15:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:20.961137 | orchestrator | 2025-05-25 01:15:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:24.005232 | orchestrator | 2025-05-25 01:15:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:24.005837 | orchestrator | 2025-05-25 01:15:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:24.006861 | orchestrator | 2025-05-25 01:15:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:24.006897 | orchestrator | 2025-05-25 01:15:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:27.049269 | orchestrator | 2025-05-25 01:15:27 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:27.050320 | orchestrator | 2025-05-25 01:15:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:27.052508 | orchestrator | 2025-05-25 01:15:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:27.052543 | orchestrator | 2025-05-25 01:15:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:30.093785 | orchestrator | 2025-05-25 01:15:30 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:30.095132 | orchestrator | 2025-05-25 01:15:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:30.096811 | orchestrator | 2025-05-25 01:15:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:30.096861 | orchestrator | 2025-05-25 01:15:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:33.151147 | orchestrator | 2025-05-25 01:15:33 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:33.153671 | orchestrator | 2025-05-25 01:15:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:33.155653 | orchestrator | 2025-05-25 01:15:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:33.156007 | orchestrator | 2025-05-25 01:15:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:36.208793 | orchestrator | 2025-05-25 01:15:36 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:36.210281 | orchestrator | 2025-05-25 01:15:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:36.212312 | orchestrator | 2025-05-25 01:15:36 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:36.212347 | orchestrator | 2025-05-25 01:15:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:39.267470 | orchestrator | 2025-05-25 01:15:39 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:39.268569 | orchestrator | 2025-05-25 01:15:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:39.270803 | orchestrator | 2025-05-25 01:15:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:39.270837 | orchestrator | 2025-05-25 01:15:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:42.315986 | orchestrator | 2025-05-25 01:15:42 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:42.319135 | orchestrator | 2025-05-25 01:15:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:42.320367 | orchestrator | 2025-05-25 01:15:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:42.320446 | orchestrator | 2025-05-25 01:15:42 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:15:45.369543 | orchestrator | 2025-05-25 01:15:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:45.370505 | orchestrator | 2025-05-25 01:15:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:45.371376 | orchestrator | 2025-05-25 01:15:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:45.371395 | orchestrator | 2025-05-25 01:15:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:48.421505 | orchestrator | 2025-05-25 01:15:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:48.422228 | orchestrator | 2025-05-25 01:15:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:48.425244 | orchestrator | 2025-05-25 01:15:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:48.425269 | orchestrator | 2025-05-25 01:15:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:51.478754 | orchestrator | 2025-05-25 01:15:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:51.481904 | orchestrator | 2025-05-25 01:15:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:51.483433 | orchestrator | 2025-05-25 01:15:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:51.484703 | orchestrator | 2025-05-25 01:15:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:54.536667 | orchestrator | 2025-05-25 01:15:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:54.539373 | orchestrator | 2025-05-25 01:15:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:54.540449 | orchestrator | 2025-05-25 01:15:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:15:54.540476 | orchestrator | 2025-05-25 01:15:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:15:57.605982 | orchestrator | 2025-05-25 01:15:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:15:57.607889 | orchestrator | 2025-05-25 01:15:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:15:57.608683 | orchestrator | 2025-05-25 01:15:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:15:57.611316 | orchestrator | 2025-05-25 01:15:57 | INFO  | Task 2bb7ee71-ae7c-4210-8232-852a7827d136 is in state STARTED 2025-05-25 01:15:57.611425 | orchestrator | 2025-05-25 01:15:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:00.670457 | orchestrator | 2025-05-25 01:16:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:00.671452 | orchestrator | 2025-05-25 01:16:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:00.673449 | orchestrator | 2025-05-25 01:16:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:00.675360 | orchestrator | 2025-05-25 01:16:00 | INFO  | Task 2bb7ee71-ae7c-4210-8232-852a7827d136 is in state STARTED 2025-05-25 01:16:00.675415 | orchestrator | 2025-05-25 01:16:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:03.727053 | orchestrator | 2025-05-25 01:16:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:03.729181 | orchestrator | 2025-05-25 01:16:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:03.732018 | orchestrator | 2025-05-25 01:16:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:03.734530 | orchestrator | 2025-05-25 01:16:03 | INFO  | Task 2bb7ee71-ae7c-4210-8232-852a7827d136 is in state STARTED 2025-05-25 01:16:03.734562 | orchestrator 
| 2025-05-25 01:16:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:06.782292 | orchestrator | 2025-05-25 01:16:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:06.783218 | orchestrator | 2025-05-25 01:16:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:06.784827 | orchestrator | 2025-05-25 01:16:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:06.785831 | orchestrator | 2025-05-25 01:16:06 | INFO  | Task 2bb7ee71-ae7c-4210-8232-852a7827d136 is in state SUCCESS 2025-05-25 01:16:06.785875 | orchestrator | 2025-05-25 01:16:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:09.837033 | orchestrator | 2025-05-25 01:16:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:09.838453 | orchestrator | 2025-05-25 01:16:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:09.840037 | orchestrator | 2025-05-25 01:16:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:09.840070 | orchestrator | 2025-05-25 01:16:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:12.891502 | orchestrator | 2025-05-25 01:16:12 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:12.892873 | orchestrator | 2025-05-25 01:16:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:12.894356 | orchestrator | 2025-05-25 01:16:12 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:12.894390 | orchestrator | 2025-05-25 01:16:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:15.947059 | orchestrator | 2025-05-25 01:16:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:15.949253 | orchestrator | 2025-05-25 01:16:15 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:15.951142 | orchestrator | 2025-05-25 01:16:15 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:15.951186 | orchestrator | 2025-05-25 01:16:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:19.000677 | orchestrator | 2025-05-25 01:16:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:19.002878 | orchestrator | 2025-05-25 01:16:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:19.004520 | orchestrator | 2025-05-25 01:16:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:19.004558 | orchestrator | 2025-05-25 01:16:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:22.058251 | orchestrator | 2025-05-25 01:16:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:22.059847 | orchestrator | 2025-05-25 01:16:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:22.061272 | orchestrator | 2025-05-25 01:16:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:22.061404 | orchestrator | 2025-05-25 01:16:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:25.116123 | orchestrator | 2025-05-25 01:16:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:25.116320 | orchestrator | 2025-05-25 01:16:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:25.117708 | orchestrator | 2025-05-25 01:16:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:25.117983 | orchestrator | 2025-05-25 01:16:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:28.167201 | orchestrator | 2025-05-25 01:16:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:16:28.168214 | orchestrator | 2025-05-25 01:16:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:28.169853 | orchestrator | 2025-05-25 01:16:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:28.169888 | orchestrator | 2025-05-25 01:16:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:31.215165 | orchestrator | 2025-05-25 01:16:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:31.216885 | orchestrator | 2025-05-25 01:16:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:31.219450 | orchestrator | 2025-05-25 01:16:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:31.219762 | orchestrator | 2025-05-25 01:16:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:34.272762 | orchestrator | 2025-05-25 01:16:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:34.276124 | orchestrator | 2025-05-25 01:16:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:34.279597 | orchestrator | 2025-05-25 01:16:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:34.279677 | orchestrator | 2025-05-25 01:16:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:37.332350 | orchestrator | 2025-05-25 01:16:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:37.335100 | orchestrator | 2025-05-25 01:16:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:37.339160 | orchestrator | 2025-05-25 01:16:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:37.339186 | orchestrator | 2025-05-25 01:16:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:40.387256 | orchestrator | 
2025-05-25 01:16:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:40.388677 | orchestrator | 2025-05-25 01:16:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:40.390115 | orchestrator | 2025-05-25 01:16:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:40.390149 | orchestrator | 2025-05-25 01:16:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:43.442352 | orchestrator | 2025-05-25 01:16:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:43.443357 | orchestrator | 2025-05-25 01:16:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:43.444726 | orchestrator | 2025-05-25 01:16:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:43.444753 | orchestrator | 2025-05-25 01:16:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:46.496702 | orchestrator | 2025-05-25 01:16:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:46.499102 | orchestrator | 2025-05-25 01:16:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:46.501190 | orchestrator | 2025-05-25 01:16:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:46.501217 | orchestrator | 2025-05-25 01:16:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:49.545604 | orchestrator | 2025-05-25 01:16:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:49.546914 | orchestrator | 2025-05-25 01:16:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:49.549097 | orchestrator | 2025-05-25 01:16:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:49.549299 | orchestrator | 2025-05-25 01:16:49 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:16:52.605007 | orchestrator | 2025-05-25 01:16:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:52.605884 | orchestrator | 2025-05-25 01:16:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:52.607096 | orchestrator | 2025-05-25 01:16:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:52.607127 | orchestrator | 2025-05-25 01:16:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:55.660691 | orchestrator | 2025-05-25 01:16:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:55.662674 | orchestrator | 2025-05-25 01:16:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:55.664925 | orchestrator | 2025-05-25 01:16:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:55.665103 | orchestrator | 2025-05-25 01:16:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:16:58.715865 | orchestrator | 2025-05-25 01:16:58 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:16:58.716681 | orchestrator | 2025-05-25 01:16:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:16:58.717469 | orchestrator | 2025-05-25 01:16:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:16:58.717497 | orchestrator | 2025-05-25 01:16:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:01.773184 | orchestrator | 2025-05-25 01:17:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:01.774188 | orchestrator | 2025-05-25 01:17:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:01.775672 | orchestrator | 2025-05-25 01:17:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:17:01.775706 | orchestrator | 2025-05-25 01:17:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:04.826090 | orchestrator | 2025-05-25 01:17:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:04.832867 | orchestrator | 2025-05-25 01:17:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:04.832937 | orchestrator | 2025-05-25 01:17:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:04.832948 | orchestrator | 2025-05-25 01:17:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:07.875745 | orchestrator | 2025-05-25 01:17:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:07.876722 | orchestrator | 2025-05-25 01:17:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:07.878773 | orchestrator | 2025-05-25 01:17:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:07.878809 | orchestrator | 2025-05-25 01:17:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:10.928145 | orchestrator | 2025-05-25 01:17:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:10.929669 | orchestrator | 2025-05-25 01:17:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:10.931878 | orchestrator | 2025-05-25 01:17:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:10.931988 | orchestrator | 2025-05-25 01:17:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:13.977709 | orchestrator | 2025-05-25 01:17:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:13.979049 | orchestrator | 2025-05-25 01:17:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:13.980829 | orchestrator | 
2025-05-25 01:17:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:13.980926 | orchestrator | 2025-05-25 01:17:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:17.028043 | orchestrator | 2025-05-25 01:17:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:17.030343 | orchestrator | 2025-05-25 01:17:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:17.033187 | orchestrator | 2025-05-25 01:17:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:17.033257 | orchestrator | 2025-05-25 01:17:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:20.080953 | orchestrator | 2025-05-25 01:17:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:20.081620 | orchestrator | 2025-05-25 01:17:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:20.082408 | orchestrator | 2025-05-25 01:17:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:20.082419 | orchestrator | 2025-05-25 01:17:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:23.131574 | orchestrator | 2025-05-25 01:17:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:23.133292 | orchestrator | 2025-05-25 01:17:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:23.134976 | orchestrator | 2025-05-25 01:17:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:23.135027 | orchestrator | 2025-05-25 01:17:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:26.185159 | orchestrator | 2025-05-25 01:17:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:26.185284 | orchestrator | 2025-05-25 01:17:26 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:26.185371 | orchestrator | 2025-05-25 01:17:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:26.185388 | orchestrator | 2025-05-25 01:17:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:29.230709 | orchestrator | 2025-05-25 01:17:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:29.232519 | orchestrator | 2025-05-25 01:17:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:29.234569 | orchestrator | 2025-05-25 01:17:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:29.234597 | orchestrator | 2025-05-25 01:17:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:32.283142 | orchestrator | 2025-05-25 01:17:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:32.285190 | orchestrator | 2025-05-25 01:17:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:32.287400 | orchestrator | 2025-05-25 01:17:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:32.287460 | orchestrator | 2025-05-25 01:17:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:35.333379 | orchestrator | 2025-05-25 01:17:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:35.334403 | orchestrator | 2025-05-25 01:17:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:35.336044 | orchestrator | 2025-05-25 01:17:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:35.336070 | orchestrator | 2025-05-25 01:17:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:38.381098 | orchestrator | 2025-05-25 01:17:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:17:38.382296 | orchestrator | 2025-05-25 01:17:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:38.384767 | orchestrator | 2025-05-25 01:17:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:38.384846 | orchestrator | 2025-05-25 01:17:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:41.428737 | orchestrator | 2025-05-25 01:17:41 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:41.430252 | orchestrator | 2025-05-25 01:17:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:41.431240 | orchestrator | 2025-05-25 01:17:41 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:41.431276 | orchestrator | 2025-05-25 01:17:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:44.488739 | orchestrator | 2025-05-25 01:17:44 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:44.489418 | orchestrator | 2025-05-25 01:17:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:44.492613 | orchestrator | 2025-05-25 01:17:44 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:44.492695 | orchestrator | 2025-05-25 01:17:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:47.541463 | orchestrator | 2025-05-25 01:17:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:17:47.542615 | orchestrator | 2025-05-25 01:17:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:17:47.543920 | orchestrator | 2025-05-25 01:17:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:17:47.543954 | orchestrator | 2025-05-25 01:17:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:17:50.601222 | orchestrator | 
2025-05-25 01:17:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:17:50.603587 | orchestrator | 2025-05-25 01:17:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:17:50.605600 | orchestrator | 2025-05-25 01:17:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:17:50.605710 | orchestrator | 2025-05-25 01:17:50 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:17:53 to 01:21:51; tasks 99824a24-efa4-450e-bb69-b7a40fafd92a, 8e1d812e-51fb-49b3-975c-ec462afb5fee, and 87a65d8a-f61b-4aca-827b-a276ae86a9d4 remained in state STARTED throughout ...]
2025-05-25 01:21:54.879906 | orchestrator | 2025-05-25 01:21:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:21:54.881103 | orchestrator | 2025-05-25 01:21:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:21:54.882244 | orchestrator | 
2025-05-25 01:21:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:21:54.882334 | orchestrator | 2025-05-25 01:21:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:21:57.941394 | orchestrator | 2025-05-25 01:21:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:21:57.943504 | orchestrator | 2025-05-25 01:21:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:21:57.944450 | orchestrator | 2025-05-25 01:21:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:21:57.944489 | orchestrator | 2025-05-25 01:21:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:00.997399 | orchestrator | 2025-05-25 01:22:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:00.998545 | orchestrator | 2025-05-25 01:22:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:01.000931 | orchestrator | 2025-05-25 01:22:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:01.000957 | orchestrator | 2025-05-25 01:22:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:04.058597 | orchestrator | 2025-05-25 01:22:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:04.058748 | orchestrator | 2025-05-25 01:22:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:04.058769 | orchestrator | 2025-05-25 01:22:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:04.058781 | orchestrator | 2025-05-25 01:22:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:07.111172 | orchestrator | 2025-05-25 01:22:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:07.113015 | orchestrator | 2025-05-25 01:22:07 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:07.114962 | orchestrator | 2025-05-25 01:22:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:07.115040 | orchestrator | 2025-05-25 01:22:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:10.166300 | orchestrator | 2025-05-25 01:22:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:10.167246 | orchestrator | 2025-05-25 01:22:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:10.169169 | orchestrator | 2025-05-25 01:22:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:10.169257 | orchestrator | 2025-05-25 01:22:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:13.219021 | orchestrator | 2025-05-25 01:22:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:13.219922 | orchestrator | 2025-05-25 01:22:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:13.220559 | orchestrator | 2025-05-25 01:22:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:13.220595 | orchestrator | 2025-05-25 01:22:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:16.275847 | orchestrator | 2025-05-25 01:22:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:16.278311 | orchestrator | 2025-05-25 01:22:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:16.282199 | orchestrator | 2025-05-25 01:22:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:16.282247 | orchestrator | 2025-05-25 01:22:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:19.335972 | orchestrator | 2025-05-25 01:22:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:22:19.337191 | orchestrator | 2025-05-25 01:22:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:19.338937 | orchestrator | 2025-05-25 01:22:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:19.338975 | orchestrator | 2025-05-25 01:22:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:22.388797 | orchestrator | 2025-05-25 01:22:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:22.389805 | orchestrator | 2025-05-25 01:22:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:22.391128 | orchestrator | 2025-05-25 01:22:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:22.391149 | orchestrator | 2025-05-25 01:22:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:25.444222 | orchestrator | 2025-05-25 01:22:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:25.444923 | orchestrator | 2025-05-25 01:22:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:25.447156 | orchestrator | 2025-05-25 01:22:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:25.447201 | orchestrator | 2025-05-25 01:22:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:28.496376 | orchestrator | 2025-05-25 01:22:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:28.498231 | orchestrator | 2025-05-25 01:22:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:28.499997 | orchestrator | 2025-05-25 01:22:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:28.500076 | orchestrator | 2025-05-25 01:22:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:31.548463 | orchestrator | 
2025-05-25 01:22:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:31.549527 | orchestrator | 2025-05-25 01:22:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:31.550861 | orchestrator | 2025-05-25 01:22:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:31.550892 | orchestrator | 2025-05-25 01:22:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:34.601487 | orchestrator | 2025-05-25 01:22:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:34.602311 | orchestrator | 2025-05-25 01:22:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:34.604326 | orchestrator | 2025-05-25 01:22:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:34.604359 | orchestrator | 2025-05-25 01:22:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:37.651949 | orchestrator | 2025-05-25 01:22:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:37.653295 | orchestrator | 2025-05-25 01:22:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:37.654988 | orchestrator | 2025-05-25 01:22:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:37.655070 | orchestrator | 2025-05-25 01:22:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:40.705298 | orchestrator | 2025-05-25 01:22:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:40.706110 | orchestrator | 2025-05-25 01:22:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:40.707491 | orchestrator | 2025-05-25 01:22:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:40.707528 | orchestrator | 2025-05-25 01:22:40 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:22:43.749697 | orchestrator | 2025-05-25 01:22:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:43.750289 | orchestrator | 2025-05-25 01:22:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:43.752547 | orchestrator | 2025-05-25 01:22:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:43.752581 | orchestrator | 2025-05-25 01:22:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:46.795347 | orchestrator | 2025-05-25 01:22:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:46.796485 | orchestrator | 2025-05-25 01:22:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:46.797777 | orchestrator | 2025-05-25 01:22:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:46.797804 | orchestrator | 2025-05-25 01:22:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:49.847324 | orchestrator | 2025-05-25 01:22:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:49.850454 | orchestrator | 2025-05-25 01:22:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:49.852095 | orchestrator | 2025-05-25 01:22:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:49.852144 | orchestrator | 2025-05-25 01:22:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:52.899021 | orchestrator | 2025-05-25 01:22:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:52.899663 | orchestrator | 2025-05-25 01:22:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:52.902404 | orchestrator | 2025-05-25 01:22:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:22:52.902441 | orchestrator | 2025-05-25 01:22:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:55.952431 | orchestrator | 2025-05-25 01:22:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:55.953677 | orchestrator | 2025-05-25 01:22:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:55.957003 | orchestrator | 2025-05-25 01:22:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:55.957051 | orchestrator | 2025-05-25 01:22:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:22:59.006097 | orchestrator | 2025-05-25 01:22:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:22:59.007431 | orchestrator | 2025-05-25 01:22:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:22:59.009104 | orchestrator | 2025-05-25 01:22:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:22:59.009356 | orchestrator | 2025-05-25 01:22:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:02.055520 | orchestrator | 2025-05-25 01:23:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:02.056570 | orchestrator | 2025-05-25 01:23:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:02.058160 | orchestrator | 2025-05-25 01:23:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:02.058211 | orchestrator | 2025-05-25 01:23:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:05.105833 | orchestrator | 2025-05-25 01:23:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:05.107548 | orchestrator | 2025-05-25 01:23:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:05.109436 | orchestrator | 
2025-05-25 01:23:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:05.109960 | orchestrator | 2025-05-25 01:23:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:08.159193 | orchestrator | 2025-05-25 01:23:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:08.161193 | orchestrator | 2025-05-25 01:23:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:08.163224 | orchestrator | 2025-05-25 01:23:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:08.163308 | orchestrator | 2025-05-25 01:23:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:11.218494 | orchestrator | 2025-05-25 01:23:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:11.219054 | orchestrator | 2025-05-25 01:23:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:11.219571 | orchestrator | 2025-05-25 01:23:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:11.219797 | orchestrator | 2025-05-25 01:23:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:14.272872 | orchestrator | 2025-05-25 01:23:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:14.274183 | orchestrator | 2025-05-25 01:23:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:14.275661 | orchestrator | 2025-05-25 01:23:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:14.275704 | orchestrator | 2025-05-25 01:23:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:17.325865 | orchestrator | 2025-05-25 01:23:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:17.327354 | orchestrator | 2025-05-25 01:23:17 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:17.329275 | orchestrator | 2025-05-25 01:23:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:17.329331 | orchestrator | 2025-05-25 01:23:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:20.377234 | orchestrator | 2025-05-25 01:23:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:20.378124 | orchestrator | 2025-05-25 01:23:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:20.379868 | orchestrator | 2025-05-25 01:23:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:20.379976 | orchestrator | 2025-05-25 01:23:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:23.429270 | orchestrator | 2025-05-25 01:23:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:23.431183 | orchestrator | 2025-05-25 01:23:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:23.432614 | orchestrator | 2025-05-25 01:23:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:23.432652 | orchestrator | 2025-05-25 01:23:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:26.482583 | orchestrator | 2025-05-25 01:23:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:26.482876 | orchestrator | 2025-05-25 01:23:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:26.484686 | orchestrator | 2025-05-25 01:23:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:26.484711 | orchestrator | 2025-05-25 01:23:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:29.536261 | orchestrator | 2025-05-25 01:23:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:23:29.537567 | orchestrator | 2025-05-25 01:23:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:29.538901 | orchestrator | 2025-05-25 01:23:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:29.538932 | orchestrator | 2025-05-25 01:23:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:32.582813 | orchestrator | 2025-05-25 01:23:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:32.584508 | orchestrator | 2025-05-25 01:23:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:32.586405 | orchestrator | 2025-05-25 01:23:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:32.586435 | orchestrator | 2025-05-25 01:23:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:35.634689 | orchestrator | 2025-05-25 01:23:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:35.636811 | orchestrator | 2025-05-25 01:23:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:35.637981 | orchestrator | 2025-05-25 01:23:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:35.638217 | orchestrator | 2025-05-25 01:23:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:38.690775 | orchestrator | 2025-05-25 01:23:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:38.691999 | orchestrator | 2025-05-25 01:23:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:38.694179 | orchestrator | 2025-05-25 01:23:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:38.694300 | orchestrator | 2025-05-25 01:23:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:41.746389 | orchestrator | 
2025-05-25 01:23:41 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:41.748115 | orchestrator | 2025-05-25 01:23:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:41.749975 | orchestrator | 2025-05-25 01:23:41 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:41.750266 | orchestrator | 2025-05-25 01:23:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:44.800455 | orchestrator | 2025-05-25 01:23:44 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:44.802175 | orchestrator | 2025-05-25 01:23:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:44.803144 | orchestrator | 2025-05-25 01:23:44 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:44.803178 | orchestrator | 2025-05-25 01:23:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:47.854201 | orchestrator | 2025-05-25 01:23:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:47.856074 | orchestrator | 2025-05-25 01:23:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:47.857602 | orchestrator | 2025-05-25 01:23:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:47.857628 | orchestrator | 2025-05-25 01:23:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:50.908283 | orchestrator | 2025-05-25 01:23:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:50.914001 | orchestrator | 2025-05-25 01:23:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:50.915379 | orchestrator | 2025-05-25 01:23:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:50.915492 | orchestrator | 2025-05-25 01:23:50 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:23:53.962713 | orchestrator | 2025-05-25 01:23:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:53.962989 | orchestrator | 2025-05-25 01:23:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:53.964358 | orchestrator | 2025-05-25 01:23:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:53.964402 | orchestrator | 2025-05-25 01:23:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:23:57.013630 | orchestrator | 2025-05-25 01:23:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:23:57.014519 | orchestrator | 2025-05-25 01:23:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:23:57.019398 | orchestrator | 2025-05-25 01:23:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:23:57.019452 | orchestrator | 2025-05-25 01:23:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:00.060129 | orchestrator | 2025-05-25 01:24:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:00.060364 | orchestrator | 2025-05-25 01:24:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:00.062225 | orchestrator | 2025-05-25 01:24:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:00.062261 | orchestrator | 2025-05-25 01:24:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:03.115351 | orchestrator | 2025-05-25 01:24:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:03.116552 | orchestrator | 2025-05-25 01:24:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:03.118158 | orchestrator | 2025-05-25 01:24:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:24:03.118186 | orchestrator | 2025-05-25 01:24:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:06.172239 | orchestrator | 2025-05-25 01:24:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:06.173534 | orchestrator | 2025-05-25 01:24:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:06.175284 | orchestrator | 2025-05-25 01:24:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:06.175323 | orchestrator | 2025-05-25 01:24:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:09.223413 | orchestrator | 2025-05-25 01:24:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:09.226933 | orchestrator | 2025-05-25 01:24:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:09.228664 | orchestrator | 2025-05-25 01:24:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:09.228686 | orchestrator | 2025-05-25 01:24:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:12.273022 | orchestrator | 2025-05-25 01:24:12 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:12.275222 | orchestrator | 2025-05-25 01:24:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:12.276405 | orchestrator | 2025-05-25 01:24:12 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:12.276437 | orchestrator | 2025-05-25 01:24:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:15.329555 | orchestrator | 2025-05-25 01:24:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:15.330933 | orchestrator | 2025-05-25 01:24:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:15.332617 | orchestrator | 
2025-05-25 01:24:15 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:15.332854 | orchestrator | 2025-05-25 01:24:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:18.381269 | orchestrator | 2025-05-25 01:24:18 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:18.382401 | orchestrator | 2025-05-25 01:24:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:18.383631 | orchestrator | 2025-05-25 01:24:18 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:18.383666 | orchestrator | 2025-05-25 01:24:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:21.430390 | orchestrator | 2025-05-25 01:24:21 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:21.430570 | orchestrator | 2025-05-25 01:24:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:21.431426 | orchestrator | 2025-05-25 01:24:21 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:21.431456 | orchestrator | 2025-05-25 01:24:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:24.478588 | orchestrator | 2025-05-25 01:24:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:24.479891 | orchestrator | 2025-05-25 01:24:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:24.481414 | orchestrator | 2025-05-25 01:24:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:24.481426 | orchestrator | 2025-05-25 01:24:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:24:27.527062 | orchestrator | 2025-05-25 01:24:27 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:24:27.527249 | orchestrator | 2025-05-25 01:24:27 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:24:27.527997 | orchestrator | 2025-05-25 01:24:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:24:27.528024 | orchestrator | 2025-05-25 01:24:27 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 s from 01:24:30 through 01:28:28: tasks 99824a24-efa4-450e-bb69-b7a40fafd92a, 8e1d812e-51fb-49b3-975c-ec462afb5fee and 87a65d8a-f61b-4aca-827b-a276ae86a9d4 remain in state STARTED; only the two records below report a change ...]
2025-05-25 01:25:56.050497 | orchestrator | 2025-05-25 01:25:56 | INFO  | Task 0041ca53-78a1-4217-a26d-f0525f552869 is in state STARTED
2025-05-25 01:26:08.284338 | orchestrator | 2025-05-25 01:26:08 | INFO  | Task 0041ca53-78a1-4217-a26d-f0525f552869 is in state SUCCESS
2025-05-25 01:28:28.694600 | orchestrator | 2025-05-25 01:28:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:28.695958 | orchestrator | 2025-05-25 01:28:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:28.698197 | orchestrator |
2025-05-25 01:28:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:28.698227 | orchestrator | 2025-05-25 01:28:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:31.751411 | orchestrator | 2025-05-25 01:28:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:31.752524 | orchestrator | 2025-05-25 01:28:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:31.754162 | orchestrator | 2025-05-25 01:28:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:31.754194 | orchestrator | 2025-05-25 01:28:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:34.799851 | orchestrator | 2025-05-25 01:28:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:34.801024 | orchestrator | 2025-05-25 01:28:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:34.802607 | orchestrator | 2025-05-25 01:28:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:34.802630 | orchestrator | 2025-05-25 01:28:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:37.853370 | orchestrator | 2025-05-25 01:28:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:37.854596 | orchestrator | 2025-05-25 01:28:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:37.856886 | orchestrator | 2025-05-25 01:28:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:37.856969 | orchestrator | 2025-05-25 01:28:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:40.904814 | orchestrator | 2025-05-25 01:28:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:40.906107 | orchestrator | 2025-05-25 01:28:40 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:40.907641 | orchestrator | 2025-05-25 01:28:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:40.907684 | orchestrator | 2025-05-25 01:28:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:43.951973 | orchestrator | 2025-05-25 01:28:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:43.953018 | orchestrator | 2025-05-25 01:28:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:43.954274 | orchestrator | 2025-05-25 01:28:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:43.954301 | orchestrator | 2025-05-25 01:28:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:47.013886 | orchestrator | 2025-05-25 01:28:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:47.016706 | orchestrator | 2025-05-25 01:28:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:47.019009 | orchestrator | 2025-05-25 01:28:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:47.019041 | orchestrator | 2025-05-25 01:28:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:50.070491 | orchestrator | 2025-05-25 01:28:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:50.072548 | orchestrator | 2025-05-25 01:28:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:50.074615 | orchestrator | 2025-05-25 01:28:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:50.075417 | orchestrator | 2025-05-25 01:28:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:53.125045 | orchestrator | 2025-05-25 01:28:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:28:53.126536 | orchestrator | 2025-05-25 01:28:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:53.128223 | orchestrator | 2025-05-25 01:28:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:53.128305 | orchestrator | 2025-05-25 01:28:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:56.186186 | orchestrator | 2025-05-25 01:28:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:56.187319 | orchestrator | 2025-05-25 01:28:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:56.188803 | orchestrator | 2025-05-25 01:28:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:56.188828 | orchestrator | 2025-05-25 01:28:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:28:59.239322 | orchestrator | 2025-05-25 01:28:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:28:59.239723 | orchestrator | 2025-05-25 01:28:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:28:59.240681 | orchestrator | 2025-05-25 01:28:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:28:59.240708 | orchestrator | 2025-05-25 01:28:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:02.296710 | orchestrator | 2025-05-25 01:29:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:02.298612 | orchestrator | 2025-05-25 01:29:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:02.300322 | orchestrator | 2025-05-25 01:29:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:02.300386 | orchestrator | 2025-05-25 01:29:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:05.348943 | orchestrator | 
2025-05-25 01:29:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:05.349467 | orchestrator | 2025-05-25 01:29:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:05.350683 | orchestrator | 2025-05-25 01:29:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:05.350828 | orchestrator | 2025-05-25 01:29:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:08.405109 | orchestrator | 2025-05-25 01:29:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:08.406425 | orchestrator | 2025-05-25 01:29:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:08.409033 | orchestrator | 2025-05-25 01:29:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:08.409062 | orchestrator | 2025-05-25 01:29:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:11.468449 | orchestrator | 2025-05-25 01:29:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:11.469150 | orchestrator | 2025-05-25 01:29:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:11.470347 | orchestrator | 2025-05-25 01:29:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:11.470524 | orchestrator | 2025-05-25 01:29:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:14.526697 | orchestrator | 2025-05-25 01:29:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:14.528427 | orchestrator | 2025-05-25 01:29:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:14.530709 | orchestrator | 2025-05-25 01:29:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:14.530808 | orchestrator | 2025-05-25 01:29:14 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:29:17.574137 | orchestrator | 2025-05-25 01:29:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:17.575127 | orchestrator | 2025-05-25 01:29:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:17.576634 | orchestrator | 2025-05-25 01:29:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:17.576664 | orchestrator | 2025-05-25 01:29:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:20.629369 | orchestrator | 2025-05-25 01:29:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:20.633336 | orchestrator | 2025-05-25 01:29:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:20.637017 | orchestrator | 2025-05-25 01:29:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:20.637105 | orchestrator | 2025-05-25 01:29:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:23.687724 | orchestrator | 2025-05-25 01:29:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:23.689916 | orchestrator | 2025-05-25 01:29:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:23.691306 | orchestrator | 2025-05-25 01:29:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:23.691409 | orchestrator | 2025-05-25 01:29:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:26.750141 | orchestrator | 2025-05-25 01:29:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:26.750632 | orchestrator | 2025-05-25 01:29:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:26.752297 | orchestrator | 2025-05-25 01:29:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:29:26.752328 | orchestrator | 2025-05-25 01:29:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:29.805944 | orchestrator | 2025-05-25 01:29:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:29.807080 | orchestrator | 2025-05-25 01:29:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:29.808441 | orchestrator | 2025-05-25 01:29:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:29.808492 | orchestrator | 2025-05-25 01:29:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:32.861488 | orchestrator | 2025-05-25 01:29:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:32.864127 | orchestrator | 2025-05-25 01:29:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:32.865623 | orchestrator | 2025-05-25 01:29:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:32.865665 | orchestrator | 2025-05-25 01:29:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:35.919337 | orchestrator | 2025-05-25 01:29:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:35.922268 | orchestrator | 2025-05-25 01:29:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:35.924701 | orchestrator | 2025-05-25 01:29:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:35.924869 | orchestrator | 2025-05-25 01:29:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:38.978822 | orchestrator | 2025-05-25 01:29:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:38.981207 | orchestrator | 2025-05-25 01:29:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:38.983899 | orchestrator | 
2025-05-25 01:29:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:38.984096 | orchestrator | 2025-05-25 01:29:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:42.038436 | orchestrator | 2025-05-25 01:29:42 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:42.040144 | orchestrator | 2025-05-25 01:29:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:42.041025 | orchestrator | 2025-05-25 01:29:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:42.041061 | orchestrator | 2025-05-25 01:29:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:45.087130 | orchestrator | 2025-05-25 01:29:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:45.087775 | orchestrator | 2025-05-25 01:29:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:45.088834 | orchestrator | 2025-05-25 01:29:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:45.088860 | orchestrator | 2025-05-25 01:29:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:48.138700 | orchestrator | 2025-05-25 01:29:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:48.139176 | orchestrator | 2025-05-25 01:29:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:48.140729 | orchestrator | 2025-05-25 01:29:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:48.140765 | orchestrator | 2025-05-25 01:29:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:51.194734 | orchestrator | 2025-05-25 01:29:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:51.196075 | orchestrator | 2025-05-25 01:29:51 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:51.197727 | orchestrator | 2025-05-25 01:29:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:51.197780 | orchestrator | 2025-05-25 01:29:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:54.248044 | orchestrator | 2025-05-25 01:29:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:54.249052 | orchestrator | 2025-05-25 01:29:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:54.250629 | orchestrator | 2025-05-25 01:29:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:54.250740 | orchestrator | 2025-05-25 01:29:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:29:57.307364 | orchestrator | 2025-05-25 01:29:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:29:57.309004 | orchestrator | 2025-05-25 01:29:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:29:57.309948 | orchestrator | 2025-05-25 01:29:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:29:57.310005 | orchestrator | 2025-05-25 01:29:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:00.359628 | orchestrator | 2025-05-25 01:30:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:00.359899 | orchestrator | 2025-05-25 01:30:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:00.360381 | orchestrator | 2025-05-25 01:30:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:00.360472 | orchestrator | 2025-05-25 01:30:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:03.407164 | orchestrator | 2025-05-25 01:30:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:30:03.407931 | orchestrator | 2025-05-25 01:30:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:03.409146 | orchestrator | 2025-05-25 01:30:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:03.409254 | orchestrator | 2025-05-25 01:30:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:06.460295 | orchestrator | 2025-05-25 01:30:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:06.461904 | orchestrator | 2025-05-25 01:30:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:06.463033 | orchestrator | 2025-05-25 01:30:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:06.463056 | orchestrator | 2025-05-25 01:30:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:09.516372 | orchestrator | 2025-05-25 01:30:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:09.516573 | orchestrator | 2025-05-25 01:30:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:09.517974 | orchestrator | 2025-05-25 01:30:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:09.517997 | orchestrator | 2025-05-25 01:30:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:12.572817 | orchestrator | 2025-05-25 01:30:12 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:12.573085 | orchestrator | 2025-05-25 01:30:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:12.574605 | orchestrator | 2025-05-25 01:30:12 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:12.574646 | orchestrator | 2025-05-25 01:30:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:15.621412 | orchestrator | 
2025-05-25 01:30:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:15.621518 | orchestrator | 2025-05-25 01:30:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:15.621533 | orchestrator | 2025-05-25 01:30:15 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:15.621545 | orchestrator | 2025-05-25 01:30:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:18.670552 | orchestrator | 2025-05-25 01:30:18 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:18.672109 | orchestrator | 2025-05-25 01:30:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:18.675002 | orchestrator | 2025-05-25 01:30:18 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:18.675338 | orchestrator | 2025-05-25 01:30:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:21.722723 | orchestrator | 2025-05-25 01:30:21 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:21.723877 | orchestrator | 2025-05-25 01:30:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:21.725079 | orchestrator | 2025-05-25 01:30:21 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:21.725166 | orchestrator | 2025-05-25 01:30:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:24.776327 | orchestrator | 2025-05-25 01:30:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:24.777213 | orchestrator | 2025-05-25 01:30:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:24.779673 | orchestrator | 2025-05-25 01:30:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:24.780017 | orchestrator | 2025-05-25 01:30:24 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:30:27.832061 | orchestrator | 2025-05-25 01:30:27 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:27.833071 | orchestrator | 2025-05-25 01:30:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:27.835171 | orchestrator | 2025-05-25 01:30:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:27.835204 | orchestrator | 2025-05-25 01:30:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:30.892274 | orchestrator | 2025-05-25 01:30:30 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:30.892829 | orchestrator | 2025-05-25 01:30:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:30.895812 | orchestrator | 2025-05-25 01:30:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:30.896435 | orchestrator | 2025-05-25 01:30:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:33.951533 | orchestrator | 2025-05-25 01:30:33 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:33.951724 | orchestrator | 2025-05-25 01:30:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:33.953889 | orchestrator | 2025-05-25 01:30:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:33.953921 | orchestrator | 2025-05-25 01:30:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:37.010308 | orchestrator | 2025-05-25 01:30:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:37.013023 | orchestrator | 2025-05-25 01:30:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:37.014857 | orchestrator | 2025-05-25 01:30:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:30:37.014896 | orchestrator | 2025-05-25 01:30:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:40.065728 | orchestrator | 2025-05-25 01:30:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:40.066895 | orchestrator | 2025-05-25 01:30:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:40.068580 | orchestrator | 2025-05-25 01:30:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:40.068606 | orchestrator | 2025-05-25 01:30:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:43.119952 | orchestrator | 2025-05-25 01:30:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:43.120891 | orchestrator | 2025-05-25 01:30:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:43.122840 | orchestrator | 2025-05-25 01:30:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:43.122883 | orchestrator | 2025-05-25 01:30:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:46.168961 | orchestrator | 2025-05-25 01:30:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:46.170875 | orchestrator | 2025-05-25 01:30:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:46.172444 | orchestrator | 2025-05-25 01:30:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:46.172475 | orchestrator | 2025-05-25 01:30:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:49.223631 | orchestrator | 2025-05-25 01:30:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:49.224588 | orchestrator | 2025-05-25 01:30:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:49.226197 | orchestrator | 
2025-05-25 01:30:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:49.226219 | orchestrator | 2025-05-25 01:30:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:52.273480 | orchestrator | 2025-05-25 01:30:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:52.274151 | orchestrator | 2025-05-25 01:30:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:52.275612 | orchestrator | 2025-05-25 01:30:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:52.275698 | orchestrator | 2025-05-25 01:30:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:55.325767 | orchestrator | 2025-05-25 01:30:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:55.326407 | orchestrator | 2025-05-25 01:30:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:55.327972 | orchestrator | 2025-05-25 01:30:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:55.328011 | orchestrator | 2025-05-25 01:30:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:30:58.377252 | orchestrator | 2025-05-25 01:30:58 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:30:58.377339 | orchestrator | 2025-05-25 01:30:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:30:58.377349 | orchestrator | 2025-05-25 01:30:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:30:58.377358 | orchestrator | 2025-05-25 01:30:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:31:01.426423 | orchestrator | 2025-05-25 01:31:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:31:01.427880 | orchestrator | 2025-05-25 01:31:01 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:31:01.429768 | orchestrator | 2025-05-25 01:31:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:31:01.429835 | orchestrator | 2025-05-25 01:31:01 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 2025-05-25 01:31:04 through 2025-05-25 01:35:05: tasks 99824a24-efa4-450e-bb69-b7a40fafd92a, 8e1d812e-51fb-49b3-975c-ec462afb5fee, and 87a65d8a-f61b-4aca-827b-a276ae86a9d4 remained in state STARTED throughout ...]
2025-05-25 01:35:05.863320 | orchestrator | 2025-05-25 01:35:05 | INFO  |
Wait 1 second(s) until the next check 2025-05-25 01:35:08.914783 | orchestrator | 2025-05-25 01:35:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:08.916577 | orchestrator | 2025-05-25 01:35:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:08.918550 | orchestrator | 2025-05-25 01:35:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:08.918590 | orchestrator | 2025-05-25 01:35:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:11.966753 | orchestrator | 2025-05-25 01:35:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:11.967622 | orchestrator | 2025-05-25 01:35:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:11.969169 | orchestrator | 2025-05-25 01:35:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:11.969193 | orchestrator | 2025-05-25 01:35:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:15.021766 | orchestrator | 2025-05-25 01:35:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:15.023040 | orchestrator | 2025-05-25 01:35:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:15.024337 | orchestrator | 2025-05-25 01:35:15 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:15.024368 | orchestrator | 2025-05-25 01:35:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:18.072040 | orchestrator | 2025-05-25 01:35:18 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:18.074308 | orchestrator | 2025-05-25 01:35:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:18.077118 | orchestrator | 2025-05-25 01:35:18 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:35:18.077172 | orchestrator | 2025-05-25 01:35:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:21.131222 | orchestrator | 2025-05-25 01:35:21 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:21.132123 | orchestrator | 2025-05-25 01:35:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:21.134471 | orchestrator | 2025-05-25 01:35:21 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:21.134559 | orchestrator | 2025-05-25 01:35:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:24.184073 | orchestrator | 2025-05-25 01:35:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:24.186566 | orchestrator | 2025-05-25 01:35:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:24.189121 | orchestrator | 2025-05-25 01:35:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:24.189145 | orchestrator | 2025-05-25 01:35:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:27.233464 | orchestrator | 2025-05-25 01:35:27 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:27.234819 | orchestrator | 2025-05-25 01:35:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:27.240671 | orchestrator | 2025-05-25 01:35:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:27.240761 | orchestrator | 2025-05-25 01:35:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:30.281575 | orchestrator | 2025-05-25 01:35:30 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:30.283164 | orchestrator | 2025-05-25 01:35:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:30.284239 | orchestrator | 
2025-05-25 01:35:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:30.284266 | orchestrator | 2025-05-25 01:35:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:33.337438 | orchestrator | 2025-05-25 01:35:33 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:33.339159 | orchestrator | 2025-05-25 01:35:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:33.340913 | orchestrator | 2025-05-25 01:35:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:33.340943 | orchestrator | 2025-05-25 01:35:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:36.388029 | orchestrator | 2025-05-25 01:35:36 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:36.389412 | orchestrator | 2025-05-25 01:35:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:36.391226 | orchestrator | 2025-05-25 01:35:36 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:36.391299 | orchestrator | 2025-05-25 01:35:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:39.440761 | orchestrator | 2025-05-25 01:35:39 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:39.442329 | orchestrator | 2025-05-25 01:35:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:39.444252 | orchestrator | 2025-05-25 01:35:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:39.444286 | orchestrator | 2025-05-25 01:35:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:42.499604 | orchestrator | 2025-05-25 01:35:42 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:42.501350 | orchestrator | 2025-05-25 01:35:42 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:42.503879 | orchestrator | 2025-05-25 01:35:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:42.503908 | orchestrator | 2025-05-25 01:35:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:45.554526 | orchestrator | 2025-05-25 01:35:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:45.555890 | orchestrator | 2025-05-25 01:35:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:45.557089 | orchestrator | 2025-05-25 01:35:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:45.557256 | orchestrator | 2025-05-25 01:35:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:48.606649 | orchestrator | 2025-05-25 01:35:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:48.608399 | orchestrator | 2025-05-25 01:35:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:48.610381 | orchestrator | 2025-05-25 01:35:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:48.610495 | orchestrator | 2025-05-25 01:35:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:51.666387 | orchestrator | 2025-05-25 01:35:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:51.668163 | orchestrator | 2025-05-25 01:35:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:51.669849 | orchestrator | 2025-05-25 01:35:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:51.669876 | orchestrator | 2025-05-25 01:35:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:54.718126 | orchestrator | 2025-05-25 01:35:54 | INFO  | Task fe195565-44b4-41c1-8a87-4a8511c49b43 is in state 
STARTED 2025-05-25 01:35:54.721399 | orchestrator | 2025-05-25 01:35:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:54.724506 | orchestrator | 2025-05-25 01:35:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:54.727068 | orchestrator | 2025-05-25 01:35:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:54.727201 | orchestrator | 2025-05-25 01:35:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:35:57.785143 | orchestrator | 2025-05-25 01:35:57 | INFO  | Task fe195565-44b4-41c1-8a87-4a8511c49b43 is in state STARTED 2025-05-25 01:35:57.786688 | orchestrator | 2025-05-25 01:35:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:35:57.789986 | orchestrator | 2025-05-25 01:35:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:35:57.793071 | orchestrator | 2025-05-25 01:35:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:35:57.793300 | orchestrator | 2025-05-25 01:35:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:00.844440 | orchestrator | 2025-05-25 01:36:00 | INFO  | Task fe195565-44b4-41c1-8a87-4a8511c49b43 is in state STARTED 2025-05-25 01:36:00.844562 | orchestrator | 2025-05-25 01:36:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:00.844680 | orchestrator | 2025-05-25 01:36:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:00.847353 | orchestrator | 2025-05-25 01:36:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:00.847814 | orchestrator | 2025-05-25 01:36:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:03.905305 | orchestrator | 2025-05-25 01:36:03 | INFO  | Task fe195565-44b4-41c1-8a87-4a8511c49b43 is in state STARTED 2025-05-25 
01:36:03.905566 | orchestrator | 2025-05-25 01:36:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:03.907399 | orchestrator | 2025-05-25 01:36:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:03.908567 | orchestrator | 2025-05-25 01:36:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:03.909059 | orchestrator | 2025-05-25 01:36:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:06.962754 | orchestrator | 2025-05-25 01:36:06 | INFO  | Task fe195565-44b4-41c1-8a87-4a8511c49b43 is in state SUCCESS 2025-05-25 01:36:06.964388 | orchestrator | 2025-05-25 01:36:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:06.966367 | orchestrator | 2025-05-25 01:36:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:06.968573 | orchestrator | 2025-05-25 01:36:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:06.968639 | orchestrator | 2025-05-25 01:36:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:10.020971 | orchestrator | 2025-05-25 01:36:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:10.021682 | orchestrator | 2025-05-25 01:36:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:10.023446 | orchestrator | 2025-05-25 01:36:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:10.023491 | orchestrator | 2025-05-25 01:36:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:13.072804 | orchestrator | 2025-05-25 01:36:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:13.075131 | orchestrator | 2025-05-25 01:36:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:13.076081 | orchestrator 
| 2025-05-25 01:36:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:13.076105 | orchestrator | 2025-05-25 01:36:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:16.127419 | orchestrator | 2025-05-25 01:36:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:16.128857 | orchestrator | 2025-05-25 01:36:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:16.130253 | orchestrator | 2025-05-25 01:36:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:16.130278 | orchestrator | 2025-05-25 01:36:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:19.182721 | orchestrator | 2025-05-25 01:36:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:19.184955 | orchestrator | 2025-05-25 01:36:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:19.187081 | orchestrator | 2025-05-25 01:36:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:19.187119 | orchestrator | 2025-05-25 01:36:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:22.236425 | orchestrator | 2025-05-25 01:36:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:22.236572 | orchestrator | 2025-05-25 01:36:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:22.237935 | orchestrator | 2025-05-25 01:36:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:22.238533 | orchestrator | 2025-05-25 01:36:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:25.292152 | orchestrator | 2025-05-25 01:36:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:25.293253 | orchestrator | 2025-05-25 01:36:25 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:25.294706 | orchestrator | 2025-05-25 01:36:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:25.294810 | orchestrator | 2025-05-25 01:36:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:28.353954 | orchestrator | 2025-05-25 01:36:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:28.356183 | orchestrator | 2025-05-25 01:36:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:28.361966 | orchestrator | 2025-05-25 01:36:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:28.362122 | orchestrator | 2025-05-25 01:36:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:31.401930 | orchestrator | 2025-05-25 01:36:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:31.402614 | orchestrator | 2025-05-25 01:36:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:31.403955 | orchestrator | 2025-05-25 01:36:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:31.404053 | orchestrator | 2025-05-25 01:36:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:34.458875 | orchestrator | 2025-05-25 01:36:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:34.459988 | orchestrator | 2025-05-25 01:36:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:34.461410 | orchestrator | 2025-05-25 01:36:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:34.461461 | orchestrator | 2025-05-25 01:36:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:37.514334 | orchestrator | 2025-05-25 01:36:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:36:37.515556 | orchestrator | 2025-05-25 01:36:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:37.517132 | orchestrator | 2025-05-25 01:36:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:37.517315 | orchestrator | 2025-05-25 01:36:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:40.570399 | orchestrator | 2025-05-25 01:36:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:40.571709 | orchestrator | 2025-05-25 01:36:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:40.573132 | orchestrator | 2025-05-25 01:36:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:40.573297 | orchestrator | 2025-05-25 01:36:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:43.622878 | orchestrator | 2025-05-25 01:36:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:43.625223 | orchestrator | 2025-05-25 01:36:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:43.627204 | orchestrator | 2025-05-25 01:36:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:43.627241 | orchestrator | 2025-05-25 01:36:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:46.679612 | orchestrator | 2025-05-25 01:36:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:46.681440 | orchestrator | 2025-05-25 01:36:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:46.683239 | orchestrator | 2025-05-25 01:36:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:46.683300 | orchestrator | 2025-05-25 01:36:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:49.727567 | orchestrator | 
2025-05-25 01:36:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:49.729647 | orchestrator | 2025-05-25 01:36:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:49.731638 | orchestrator | 2025-05-25 01:36:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:49.731717 | orchestrator | 2025-05-25 01:36:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:52.782285 | orchestrator | 2025-05-25 01:36:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:52.784152 | orchestrator | 2025-05-25 01:36:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:52.785723 | orchestrator | 2025-05-25 01:36:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:52.785752 | orchestrator | 2025-05-25 01:36:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:55.827590 | orchestrator | 2025-05-25 01:36:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:55.828617 | orchestrator | 2025-05-25 01:36:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:55.830252 | orchestrator | 2025-05-25 01:36:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:55.830410 | orchestrator | 2025-05-25 01:36:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:36:58.877383 | orchestrator | 2025-05-25 01:36:58 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:36:58.878574 | orchestrator | 2025-05-25 01:36:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:36:58.879566 | orchestrator | 2025-05-25 01:36:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:36:58.879592 | orchestrator | 2025-05-25 01:36:58 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:37:01.918532 | orchestrator | 2025-05-25 01:37:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:01.919402 | orchestrator | 2025-05-25 01:37:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:01.921284 | orchestrator | 2025-05-25 01:37:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:01.921307 | orchestrator | 2025-05-25 01:37:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:04.972453 | orchestrator | 2025-05-25 01:37:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:04.973884 | orchestrator | 2025-05-25 01:37:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:04.976426 | orchestrator | 2025-05-25 01:37:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:04.976567 | orchestrator | 2025-05-25 01:37:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:08.039767 | orchestrator | 2025-05-25 01:37:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:08.042780 | orchestrator | 2025-05-25 01:37:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:08.044465 | orchestrator | 2025-05-25 01:37:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:08.044489 | orchestrator | 2025-05-25 01:37:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:11.097580 | orchestrator | 2025-05-25 01:37:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:11.098751 | orchestrator | 2025-05-25 01:37:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:11.100265 | orchestrator | 2025-05-25 01:37:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:37:11.100293 | orchestrator | 2025-05-25 01:37:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:14.154117 | orchestrator | 2025-05-25 01:37:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:14.155710 | orchestrator | 2025-05-25 01:37:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:14.157248 | orchestrator | 2025-05-25 01:37:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:14.157297 | orchestrator | 2025-05-25 01:37:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:17.209683 | orchestrator | 2025-05-25 01:37:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:17.210960 | orchestrator | 2025-05-25 01:37:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:17.213655 | orchestrator | 2025-05-25 01:37:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:17.213799 | orchestrator | 2025-05-25 01:37:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:20.256326 | orchestrator | 2025-05-25 01:37:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:20.259854 | orchestrator | 2025-05-25 01:37:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:20.262179 | orchestrator | 2025-05-25 01:37:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:20.262294 | orchestrator | 2025-05-25 01:37:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:23.312854 | orchestrator | 2025-05-25 01:37:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:23.313300 | orchestrator | 2025-05-25 01:37:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:23.314007 | orchestrator | 
2025-05-25 01:37:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:23.314127 | orchestrator | 2025-05-25 01:37:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:26.358405 | orchestrator | 2025-05-25 01:37:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:26.359894 | orchestrator | 2025-05-25 01:37:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:26.361530 | orchestrator | 2025-05-25 01:37:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:26.361553 | orchestrator | 2025-05-25 01:37:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:29.414492 | orchestrator | 2025-05-25 01:37:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:29.416505 | orchestrator | 2025-05-25 01:37:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:29.419047 | orchestrator | 2025-05-25 01:37:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:29.419093 | orchestrator | 2025-05-25 01:37:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:32.470304 | orchestrator | 2025-05-25 01:37:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:32.470819 | orchestrator | 2025-05-25 01:37:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:37:32.472580 | orchestrator | 2025-05-25 01:37:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:37:32.472707 | orchestrator | 2025-05-25 01:37:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:37:35.524009 | orchestrator | 2025-05-25 01:37:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:37:35.525348 | orchestrator | 2025-05-25 01:37:35 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:37:35.527623 | orchestrator | 2025-05-25 01:37:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:37:35.527664 | orchestrator | 2025-05-25 01:37:35 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 01:37:38 through 01:41:36: tasks 99824a24-efa4-450e-bb69-b7a40fafd92a, 8e1d812e-51fb-49b3-975c-ec462afb5fee, and 87a65d8a-f61b-4aca-827b-a276ae86a9d4 remained in state STARTED ...]
2025-05-25 01:41:39.832982 | orchestrator | 2025-05-25 01:41:39 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:41:39.834606 | orchestrator | 2025-05-25 01:41:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:41:39.837821 | orchestrator | 2025-05-25 01:41:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:41:39.838080 | orchestrator | 2025-05-25 01:41:39 | INFO  |
Wait 1 second(s) until the next check 2025-05-25 01:41:42.889191 | orchestrator | 2025-05-25 01:41:42 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:41:42.892550 | orchestrator | 2025-05-25 01:41:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:41:42.894148 | orchestrator | 2025-05-25 01:41:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:41:42.894189 | orchestrator | 2025-05-25 01:41:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:41:45.951487 | orchestrator | 2025-05-25 01:41:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:41:45.952952 | orchestrator | 2025-05-25 01:41:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:41:45.954427 | orchestrator | 2025-05-25 01:41:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:41:45.954471 | orchestrator | 2025-05-25 01:41:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:41:48.999354 | orchestrator | 2025-05-25 01:41:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:41:49.000672 | orchestrator | 2025-05-25 01:41:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:41:49.003764 | orchestrator | 2025-05-25 01:41:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:41:49.003794 | orchestrator | 2025-05-25 01:41:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:41:52.054257 | orchestrator | 2025-05-25 01:41:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:41:52.054998 | orchestrator | 2025-05-25 01:41:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:41:52.057101 | orchestrator | 2025-05-25 01:41:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:41:52.057130 | orchestrator | 2025-05-25 01:41:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:41:55.108342 | orchestrator | 2025-05-25 01:41:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:41:55.109556 | orchestrator | 2025-05-25 01:41:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:41:55.110906 | orchestrator | 2025-05-25 01:41:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:41:55.110948 | orchestrator | 2025-05-25 01:41:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:41:58.161549 | orchestrator | 2025-05-25 01:41:58 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:41:58.163150 | orchestrator | 2025-05-25 01:41:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:41:58.164860 | orchestrator | 2025-05-25 01:41:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:41:58.164898 | orchestrator | 2025-05-25 01:41:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:01.217229 | orchestrator | 2025-05-25 01:42:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:01.218402 | orchestrator | 2025-05-25 01:42:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:01.219623 | orchestrator | 2025-05-25 01:42:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:01.219665 | orchestrator | 2025-05-25 01:42:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:04.268375 | orchestrator | 2025-05-25 01:42:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:04.269396 | orchestrator | 2025-05-25 01:42:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:04.271838 | orchestrator | 
2025-05-25 01:42:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:04.271880 | orchestrator | 2025-05-25 01:42:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:07.329402 | orchestrator | 2025-05-25 01:42:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:07.330265 | orchestrator | 2025-05-25 01:42:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:07.332063 | orchestrator | 2025-05-25 01:42:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:07.332138 | orchestrator | 2025-05-25 01:42:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:10.384542 | orchestrator | 2025-05-25 01:42:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:10.384732 | orchestrator | 2025-05-25 01:42:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:10.386093 | orchestrator | 2025-05-25 01:42:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:10.386185 | orchestrator | 2025-05-25 01:42:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:13.435380 | orchestrator | 2025-05-25 01:42:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:13.436278 | orchestrator | 2025-05-25 01:42:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:13.437873 | orchestrator | 2025-05-25 01:42:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:13.437913 | orchestrator | 2025-05-25 01:42:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:16.482323 | orchestrator | 2025-05-25 01:42:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:16.483362 | orchestrator | 2025-05-25 01:42:16 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:16.484898 | orchestrator | 2025-05-25 01:42:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:16.484941 | orchestrator | 2025-05-25 01:42:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:19.536415 | orchestrator | 2025-05-25 01:42:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:19.536551 | orchestrator | 2025-05-25 01:42:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:19.539299 | orchestrator | 2025-05-25 01:42:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:19.539347 | orchestrator | 2025-05-25 01:42:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:22.584141 | orchestrator | 2025-05-25 01:42:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:22.585245 | orchestrator | 2025-05-25 01:42:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:22.587873 | orchestrator | 2025-05-25 01:42:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:22.587910 | orchestrator | 2025-05-25 01:42:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:25.634546 | orchestrator | 2025-05-25 01:42:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:25.635338 | orchestrator | 2025-05-25 01:42:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:25.637267 | orchestrator | 2025-05-25 01:42:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:25.637300 | orchestrator | 2025-05-25 01:42:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:28.685947 | orchestrator | 2025-05-25 01:42:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:42:28.687337 | orchestrator | 2025-05-25 01:42:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:28.689660 | orchestrator | 2025-05-25 01:42:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:28.689983 | orchestrator | 2025-05-25 01:42:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:31.736733 | orchestrator | 2025-05-25 01:42:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:31.737287 | orchestrator | 2025-05-25 01:42:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:31.738525 | orchestrator | 2025-05-25 01:42:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:31.738621 | orchestrator | 2025-05-25 01:42:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:34.786675 | orchestrator | 2025-05-25 01:42:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:34.787993 | orchestrator | 2025-05-25 01:42:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:34.789453 | orchestrator | 2025-05-25 01:42:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:34.789661 | orchestrator | 2025-05-25 01:42:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:37.842552 | orchestrator | 2025-05-25 01:42:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:37.844492 | orchestrator | 2025-05-25 01:42:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:37.846288 | orchestrator | 2025-05-25 01:42:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:37.846382 | orchestrator | 2025-05-25 01:42:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:40.895751 | orchestrator | 
2025-05-25 01:42:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:40.897261 | orchestrator | 2025-05-25 01:42:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:40.898669 | orchestrator | 2025-05-25 01:42:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:40.898710 | orchestrator | 2025-05-25 01:42:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:43.946748 | orchestrator | 2025-05-25 01:42:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:43.947988 | orchestrator | 2025-05-25 01:42:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:43.949199 | orchestrator | 2025-05-25 01:42:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:43.949224 | orchestrator | 2025-05-25 01:42:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:47.010605 | orchestrator | 2025-05-25 01:42:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:47.012215 | orchestrator | 2025-05-25 01:42:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:47.017281 | orchestrator | 2025-05-25 01:42:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:47.017342 | orchestrator | 2025-05-25 01:42:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:50.062949 | orchestrator | 2025-05-25 01:42:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:50.063513 | orchestrator | 2025-05-25 01:42:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:50.064256 | orchestrator | 2025-05-25 01:42:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:50.064286 | orchestrator | 2025-05-25 01:42:50 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:42:53.116594 | orchestrator | 2025-05-25 01:42:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:53.118182 | orchestrator | 2025-05-25 01:42:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:53.119471 | orchestrator | 2025-05-25 01:42:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:53.119502 | orchestrator | 2025-05-25 01:42:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:56.169939 | orchestrator | 2025-05-25 01:42:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:56.170786 | orchestrator | 2025-05-25 01:42:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:56.172186 | orchestrator | 2025-05-25 01:42:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:56.172380 | orchestrator | 2025-05-25 01:42:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:42:59.218129 | orchestrator | 2025-05-25 01:42:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:42:59.220144 | orchestrator | 2025-05-25 01:42:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:42:59.222274 | orchestrator | 2025-05-25 01:42:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:42:59.222302 | orchestrator | 2025-05-25 01:42:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:02.266633 | orchestrator | 2025-05-25 01:43:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:02.266981 | orchestrator | 2025-05-25 01:43:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:02.268391 | orchestrator | 2025-05-25 01:43:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED 2025-05-25 01:43:02.268448 | orchestrator | 2025-05-25 01:43:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:05.309849 | orchestrator | 2025-05-25 01:43:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:05.311617 | orchestrator | 2025-05-25 01:43:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:05.314287 | orchestrator | 2025-05-25 01:43:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:05.314326 | orchestrator | 2025-05-25 01:43:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:08.366622 | orchestrator | 2025-05-25 01:43:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:08.368085 | orchestrator | 2025-05-25 01:43:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:08.370168 | orchestrator | 2025-05-25 01:43:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:08.370219 | orchestrator | 2025-05-25 01:43:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:11.426586 | orchestrator | 2025-05-25 01:43:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:11.430252 | orchestrator | 2025-05-25 01:43:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:11.432172 | orchestrator | 2025-05-25 01:43:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:11.432215 | orchestrator | 2025-05-25 01:43:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:14.485085 | orchestrator | 2025-05-25 01:43:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:14.487154 | orchestrator | 2025-05-25 01:43:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:14.489145 | orchestrator | 
2025-05-25 01:43:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:14.489224 | orchestrator | 2025-05-25 01:43:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:17.543283 | orchestrator | 2025-05-25 01:43:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:17.544584 | orchestrator | 2025-05-25 01:43:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:17.546167 | orchestrator | 2025-05-25 01:43:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:17.546201 | orchestrator | 2025-05-25 01:43:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:20.597467 | orchestrator | 2025-05-25 01:43:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:20.599339 | orchestrator | 2025-05-25 01:43:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:20.601986 | orchestrator | 2025-05-25 01:43:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:20.602171 | orchestrator | 2025-05-25 01:43:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:23.646770 | orchestrator | 2025-05-25 01:43:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:23.648070 | orchestrator | 2025-05-25 01:43:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:23.649435 | orchestrator | 2025-05-25 01:43:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:23.650547 | orchestrator | 2025-05-25 01:43:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:26.703325 | orchestrator | 2025-05-25 01:43:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:26.704655 | orchestrator | 2025-05-25 01:43:26 | INFO  | Task 
8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:26.706461 | orchestrator | 2025-05-25 01:43:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:26.706492 | orchestrator | 2025-05-25 01:43:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:29.759009 | orchestrator | 2025-05-25 01:43:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:29.760616 | orchestrator | 2025-05-25 01:43:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:29.762909 | orchestrator | 2025-05-25 01:43:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:29.762939 | orchestrator | 2025-05-25 01:43:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:32.822535 | orchestrator | 2025-05-25 01:43:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:32.823645 | orchestrator | 2025-05-25 01:43:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:32.826694 | orchestrator | 2025-05-25 01:43:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:32.826838 | orchestrator | 2025-05-25 01:43:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:35.880596 | orchestrator | 2025-05-25 01:43:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:35.884197 | orchestrator | 2025-05-25 01:43:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:35.887501 | orchestrator | 2025-05-25 01:43:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:35.887519 | orchestrator | 2025-05-25 01:43:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:38.938918 | orchestrator | 2025-05-25 01:43:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state 
STARTED 2025-05-25 01:43:38.940073 | orchestrator | 2025-05-25 01:43:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:38.941698 | orchestrator | 2025-05-25 01:43:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:38.941739 | orchestrator | 2025-05-25 01:43:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:41.990866 | orchestrator | 2025-05-25 01:43:41 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:41.992586 | orchestrator | 2025-05-25 01:43:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:41.995283 | orchestrator | 2025-05-25 01:43:41 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:41.995318 | orchestrator | 2025-05-25 01:43:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:45.048962 | orchestrator | 2025-05-25 01:43:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:45.050278 | orchestrator | 2025-05-25 01:43:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:45.051689 | orchestrator | 2025-05-25 01:43:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:45.051767 | orchestrator | 2025-05-25 01:43:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:48.111430 | orchestrator | 2025-05-25 01:43:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:48.112775 | orchestrator | 2025-05-25 01:43:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:48.114581 | orchestrator | 2025-05-25 01:43:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:48.114666 | orchestrator | 2025-05-25 01:43:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:51.172177 | orchestrator | 
2025-05-25 01:43:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:51.173817 | orchestrator | 2025-05-25 01:43:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:51.175157 | orchestrator | 2025-05-25 01:43:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:51.175202 | orchestrator | 2025-05-25 01:43:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:54.217818 | orchestrator | 2025-05-25 01:43:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:54.220099 | orchestrator | 2025-05-25 01:43:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:54.222243 | orchestrator | 2025-05-25 01:43:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:54.222421 | orchestrator | 2025-05-25 01:43:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:43:57.269353 | orchestrator | 2025-05-25 01:43:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:43:57.271338 | orchestrator | 2025-05-25 01:43:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:43:57.272831 | orchestrator | 2025-05-25 01:43:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:43:57.272877 | orchestrator | 2025-05-25 01:43:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:44:00.316630 | orchestrator | 2025-05-25 01:44:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:44:00.317658 | orchestrator | 2025-05-25 01:44:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:44:00.320216 | orchestrator | 2025-05-25 01:44:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:44:00.320262 | orchestrator | 2025-05-25 01:44:00 | INFO  | 
Wait 1 second(s) until the next check 2025-05-25 01:44:03.376530 | orchestrator | 2025-05-25 01:44:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:44:03.378372 | orchestrator | 2025-05-25 01:44:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:44:03.381495 | orchestrator | 2025-05-25 01:44:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:44:03.381538 | orchestrator | 2025-05-25 01:44:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:44:06.436063 | orchestrator | 2025-05-25 01:44:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:44:06.437541 | orchestrator | 2025-05-25 01:44:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:44:06.441470 | orchestrator | 2025-05-25 01:44:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:44:06.441558 | orchestrator | 2025-05-25 01:44:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:44:09.492212 | orchestrator | 2025-05-25 01:44:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:44:09.494426 | orchestrator | 2025-05-25 01:44:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:44:09.498638 | orchestrator | 2025-05-25 01:44:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:44:09.498682 | orchestrator | 2025-05-25 01:44:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:44:12.551116 | orchestrator | 2025-05-25 01:44:12 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:44:12.554710 | orchestrator | 2025-05-25 01:44:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:44:12.557407 | orchestrator | 2025-05-25 01:44:12 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state 
STARTED
2025-05-25 01:44:12.557451 | orchestrator | 2025-05-25 01:44:12 | INFO  | Wait 1 second(s) until the next check
2025-05-25 01:44:15.609541 | orchestrator | 2025-05-25 01:44:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:44:15.613376 | orchestrator | 2025-05-25 01:44:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:44:15.615572 | orchestrator | 2025-05-25 01:44:15 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:44:15.615604 | orchestrator | 2025-05-25 01:44:15 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycles (the same three tasks in state STARTED, roughly 3 s apart) from 01:44:18 through 01:45:53 elided ...]
2025-05-25 01:45:56.419340 | orchestrator | 2025-05-25 01:45:56 | INFO  | Task f86a384f-72a4-4ee2-82c1-822086db2b70 is in state STARTED
2025-05-25 01:45:56.419810 | orchestrator | 2025-05-25 01:45:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:45:56.421840 | orchestrator | 2025-05-25 01:45:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:45:56.423420 | orchestrator | 2025-05-25 01:45:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:45:56.423567 | orchestrator | 2025-05-25 01:45:56 | INFO  | Wait 1 second(s) until the next check
[... identical four-task poll cycles at 01:45:59 and 01:46:02 elided ...]
2025-05-25 01:46:05.604827 | orchestrator | 2025-05-25 01:46:05 | INFO  | Task f86a384f-72a4-4ee2-82c1-822086db2b70 is in state SUCCESS
2025-05-25 01:46:05.605885 | orchestrator | 2025-05-25 01:46:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:46:05.607146 | orchestrator | 2025-05-25 01:46:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED
2025-05-25 01:46:05.609108 | orchestrator | 2025-05-25 01:46:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED
2025-05-25 01:46:05.609543 | orchestrator | 2025-05-25 01:46:05 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycles (the remaining three tasks in state STARTED, roughly 3 s apart) from 01:46:08 through 01:48:01 elided ...]
2025-05-25 01:48:04.744775 | orchestrator | 2025-05-25 01:48:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED
2025-05-25 01:48:04.746009 | orchestrator
| 2025-05-25 01:48:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:04.747385 | orchestrator | 2025-05-25 01:48:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:04.747445 | orchestrator | 2025-05-25 01:48:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:07.798082 | orchestrator | 2025-05-25 01:48:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:07.798920 | orchestrator | 2025-05-25 01:48:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:07.802286 | orchestrator | 2025-05-25 01:48:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:07.802356 | orchestrator | 2025-05-25 01:48:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:10.850085 | orchestrator | 2025-05-25 01:48:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:10.850734 | orchestrator | 2025-05-25 01:48:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:10.852805 | orchestrator | 2025-05-25 01:48:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:10.853696 | orchestrator | 2025-05-25 01:48:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:13.905517 | orchestrator | 2025-05-25 01:48:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:13.908502 | orchestrator | 2025-05-25 01:48:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:13.910520 | orchestrator | 2025-05-25 01:48:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:13.910616 | orchestrator | 2025-05-25 01:48:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:16.961129 | orchestrator | 2025-05-25 01:48:16 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:16.964677 | orchestrator | 2025-05-25 01:48:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:16.966613 | orchestrator | 2025-05-25 01:48:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:16.966733 | orchestrator | 2025-05-25 01:48:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:20.020815 | orchestrator | 2025-05-25 01:48:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:20.021755 | orchestrator | 2025-05-25 01:48:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:20.023496 | orchestrator | 2025-05-25 01:48:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:20.023525 | orchestrator | 2025-05-25 01:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:23.070431 | orchestrator | 2025-05-25 01:48:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:23.071483 | orchestrator | 2025-05-25 01:48:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:23.073286 | orchestrator | 2025-05-25 01:48:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:23.073381 | orchestrator | 2025-05-25 01:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:26.131456 | orchestrator | 2025-05-25 01:48:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:26.133729 | orchestrator | 2025-05-25 01:48:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:26.136472 | orchestrator | 2025-05-25 01:48:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:26.136507 | orchestrator | 2025-05-25 01:48:26 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:48:29.191380 | orchestrator | 2025-05-25 01:48:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:29.192882 | orchestrator | 2025-05-25 01:48:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:29.194817 | orchestrator | 2025-05-25 01:48:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:29.195342 | orchestrator | 2025-05-25 01:48:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:32.246321 | orchestrator | 2025-05-25 01:48:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:32.247597 | orchestrator | 2025-05-25 01:48:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:32.249228 | orchestrator | 2025-05-25 01:48:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:32.249261 | orchestrator | 2025-05-25 01:48:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:35.300698 | orchestrator | 2025-05-25 01:48:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:35.301426 | orchestrator | 2025-05-25 01:48:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:35.302926 | orchestrator | 2025-05-25 01:48:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:35.303020 | orchestrator | 2025-05-25 01:48:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:38.355859 | orchestrator | 2025-05-25 01:48:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:38.358718 | orchestrator | 2025-05-25 01:48:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:38.360538 | orchestrator | 2025-05-25 01:48:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:48:38.360634 | orchestrator | 2025-05-25 01:48:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:41.412202 | orchestrator | 2025-05-25 01:48:41 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:41.413536 | orchestrator | 2025-05-25 01:48:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:41.414701 | orchestrator | 2025-05-25 01:48:41 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:41.414732 | orchestrator | 2025-05-25 01:48:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:44.464698 | orchestrator | 2025-05-25 01:48:44 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:44.466516 | orchestrator | 2025-05-25 01:48:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:44.468161 | orchestrator | 2025-05-25 01:48:44 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:44.468192 | orchestrator | 2025-05-25 01:48:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:47.515469 | orchestrator | 2025-05-25 01:48:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:47.516865 | orchestrator | 2025-05-25 01:48:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:47.518523 | orchestrator | 2025-05-25 01:48:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:47.518614 | orchestrator | 2025-05-25 01:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:50.569333 | orchestrator | 2025-05-25 01:48:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:50.571201 | orchestrator | 2025-05-25 01:48:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:50.573540 | orchestrator | 2025-05-25 01:48:50 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:50.573705 | orchestrator | 2025-05-25 01:48:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:53.625828 | orchestrator | 2025-05-25 01:48:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:53.626569 | orchestrator | 2025-05-25 01:48:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:53.627741 | orchestrator | 2025-05-25 01:48:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:53.627767 | orchestrator | 2025-05-25 01:48:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:56.673115 | orchestrator | 2025-05-25 01:48:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:56.674092 | orchestrator | 2025-05-25 01:48:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:56.675102 | orchestrator | 2025-05-25 01:48:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:56.675166 | orchestrator | 2025-05-25 01:48:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:48:59.731285 | orchestrator | 2025-05-25 01:48:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:48:59.731787 | orchestrator | 2025-05-25 01:48:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:48:59.733344 | orchestrator | 2025-05-25 01:48:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:48:59.733444 | orchestrator | 2025-05-25 01:48:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:02.781212 | orchestrator | 2025-05-25 01:49:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:02.783385 | orchestrator | 2025-05-25 01:49:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:49:02.785002 | orchestrator | 2025-05-25 01:49:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:02.785037 | orchestrator | 2025-05-25 01:49:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:05.831863 | orchestrator | 2025-05-25 01:49:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:05.833116 | orchestrator | 2025-05-25 01:49:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:05.834482 | orchestrator | 2025-05-25 01:49:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:05.834585 | orchestrator | 2025-05-25 01:49:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:08.883469 | orchestrator | 2025-05-25 01:49:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:08.884398 | orchestrator | 2025-05-25 01:49:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:08.885867 | orchestrator | 2025-05-25 01:49:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:08.885900 | orchestrator | 2025-05-25 01:49:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:11.934915 | orchestrator | 2025-05-25 01:49:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:11.937016 | orchestrator | 2025-05-25 01:49:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:11.940398 | orchestrator | 2025-05-25 01:49:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:11.940486 | orchestrator | 2025-05-25 01:49:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:14.992388 | orchestrator | 2025-05-25 01:49:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:14.993856 | orchestrator 
| 2025-05-25 01:49:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:14.995208 | orchestrator | 2025-05-25 01:49:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:14.995238 | orchestrator | 2025-05-25 01:49:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:18.037574 | orchestrator | 2025-05-25 01:49:18 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:18.037776 | orchestrator | 2025-05-25 01:49:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:18.039025 | orchestrator | 2025-05-25 01:49:18 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:18.039067 | orchestrator | 2025-05-25 01:49:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:21.089223 | orchestrator | 2025-05-25 01:49:21 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:21.091052 | orchestrator | 2025-05-25 01:49:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:21.092723 | orchestrator | 2025-05-25 01:49:21 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:21.092817 | orchestrator | 2025-05-25 01:49:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:24.144115 | orchestrator | 2025-05-25 01:49:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:24.145376 | orchestrator | 2025-05-25 01:49:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:24.146835 | orchestrator | 2025-05-25 01:49:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:24.146862 | orchestrator | 2025-05-25 01:49:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:27.200692 | orchestrator | 2025-05-25 01:49:27 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:27.201314 | orchestrator | 2025-05-25 01:49:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:27.203018 | orchestrator | 2025-05-25 01:49:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:27.203069 | orchestrator | 2025-05-25 01:49:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:30.256800 | orchestrator | 2025-05-25 01:49:30 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:30.259333 | orchestrator | 2025-05-25 01:49:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:30.262453 | orchestrator | 2025-05-25 01:49:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:30.263172 | orchestrator | 2025-05-25 01:49:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:33.309212 | orchestrator | 2025-05-25 01:49:33 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:33.310616 | orchestrator | 2025-05-25 01:49:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:33.311874 | orchestrator | 2025-05-25 01:49:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:33.313373 | orchestrator | 2025-05-25 01:49:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:36.361715 | orchestrator | 2025-05-25 01:49:36 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:36.362373 | orchestrator | 2025-05-25 01:49:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:36.364429 | orchestrator | 2025-05-25 01:49:36 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:36.364458 | orchestrator | 2025-05-25 01:49:36 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:49:39.420254 | orchestrator | 2025-05-25 01:49:39 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:39.421351 | orchestrator | 2025-05-25 01:49:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:39.422949 | orchestrator | 2025-05-25 01:49:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:39.423041 | orchestrator | 2025-05-25 01:49:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:42.476087 | orchestrator | 2025-05-25 01:49:42 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:42.478765 | orchestrator | 2025-05-25 01:49:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:42.481893 | orchestrator | 2025-05-25 01:49:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:42.481925 | orchestrator | 2025-05-25 01:49:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:45.532735 | orchestrator | 2025-05-25 01:49:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:45.533950 | orchestrator | 2025-05-25 01:49:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:45.535220 | orchestrator | 2025-05-25 01:49:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:45.535260 | orchestrator | 2025-05-25 01:49:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:48.585838 | orchestrator | 2025-05-25 01:49:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:48.586554 | orchestrator | 2025-05-25 01:49:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:48.587847 | orchestrator | 2025-05-25 01:49:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:49:48.587870 | orchestrator | 2025-05-25 01:49:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:51.638957 | orchestrator | 2025-05-25 01:49:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:51.640256 | orchestrator | 2025-05-25 01:49:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:51.642084 | orchestrator | 2025-05-25 01:49:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:51.642128 | orchestrator | 2025-05-25 01:49:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:54.688528 | orchestrator | 2025-05-25 01:49:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:54.689507 | orchestrator | 2025-05-25 01:49:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:54.691062 | orchestrator | 2025-05-25 01:49:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:54.691126 | orchestrator | 2025-05-25 01:49:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:49:57.741683 | orchestrator | 2025-05-25 01:49:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:49:57.742747 | orchestrator | 2025-05-25 01:49:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:49:57.744021 | orchestrator | 2025-05-25 01:49:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:49:57.744032 | orchestrator | 2025-05-25 01:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:00.788533 | orchestrator | 2025-05-25 01:50:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:00.789101 | orchestrator | 2025-05-25 01:50:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:00.790765 | orchestrator | 2025-05-25 01:50:00 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:00.790791 | orchestrator | 2025-05-25 01:50:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:03.839837 | orchestrator | 2025-05-25 01:50:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:03.841004 | orchestrator | 2025-05-25 01:50:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:03.842223 | orchestrator | 2025-05-25 01:50:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:03.842353 | orchestrator | 2025-05-25 01:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:06.889824 | orchestrator | 2025-05-25 01:50:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:06.890944 | orchestrator | 2025-05-25 01:50:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:06.892460 | orchestrator | 2025-05-25 01:50:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:06.892551 | orchestrator | 2025-05-25 01:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:09.944846 | orchestrator | 2025-05-25 01:50:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:09.945714 | orchestrator | 2025-05-25 01:50:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:09.946792 | orchestrator | 2025-05-25 01:50:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:09.946821 | orchestrator | 2025-05-25 01:50:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:13.002600 | orchestrator | 2025-05-25 01:50:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:13.005837 | orchestrator | 2025-05-25 01:50:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:50:13.008060 | orchestrator | 2025-05-25 01:50:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:13.008092 | orchestrator | 2025-05-25 01:50:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:16.063266 | orchestrator | 2025-05-25 01:50:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:16.066377 | orchestrator | 2025-05-25 01:50:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:16.068718 | orchestrator | 2025-05-25 01:50:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:16.068858 | orchestrator | 2025-05-25 01:50:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:19.112581 | orchestrator | 2025-05-25 01:50:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:19.113866 | orchestrator | 2025-05-25 01:50:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:19.115422 | orchestrator | 2025-05-25 01:50:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:19.115457 | orchestrator | 2025-05-25 01:50:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:22.158769 | orchestrator | 2025-05-25 01:50:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:22.160470 | orchestrator | 2025-05-25 01:50:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:22.162295 | orchestrator | 2025-05-25 01:50:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:22.162396 | orchestrator | 2025-05-25 01:50:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:25.212734 | orchestrator | 2025-05-25 01:50:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:25.213803 | orchestrator 
| 2025-05-25 01:50:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:25.215396 | orchestrator | 2025-05-25 01:50:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:25.215492 | orchestrator | 2025-05-25 01:50:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:28.266315 | orchestrator | 2025-05-25 01:50:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:28.268395 | orchestrator | 2025-05-25 01:50:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:28.270336 | orchestrator | 2025-05-25 01:50:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:28.270378 | orchestrator | 2025-05-25 01:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:31.319227 | orchestrator | 2025-05-25 01:50:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:31.320748 | orchestrator | 2025-05-25 01:50:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:31.321973 | orchestrator | 2025-05-25 01:50:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:31.322204 | orchestrator | 2025-05-25 01:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:34.373751 | orchestrator | 2025-05-25 01:50:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:34.375850 | orchestrator | 2025-05-25 01:50:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:34.377163 | orchestrator | 2025-05-25 01:50:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:34.377260 | orchestrator | 2025-05-25 01:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:37.427383 | orchestrator | 2025-05-25 01:50:37 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:37.429046 | orchestrator | 2025-05-25 01:50:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:37.431290 | orchestrator | 2025-05-25 01:50:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:37.431343 | orchestrator | 2025-05-25 01:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:40.484916 | orchestrator | 2025-05-25 01:50:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:40.485650 | orchestrator | 2025-05-25 01:50:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:40.486860 | orchestrator | 2025-05-25 01:50:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:40.486892 | orchestrator | 2025-05-25 01:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:43.533475 | orchestrator | 2025-05-25 01:50:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:43.534229 | orchestrator | 2025-05-25 01:50:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:43.536133 | orchestrator | 2025-05-25 01:50:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:43.536264 | orchestrator | 2025-05-25 01:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:46.584855 | orchestrator | 2025-05-25 01:50:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:46.586522 | orchestrator | 2025-05-25 01:50:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:46.587909 | orchestrator | 2025-05-25 01:50:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:46.587983 | orchestrator | 2025-05-25 01:50:46 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:50:49.637798 | orchestrator | 2025-05-25 01:50:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:49.639072 | orchestrator | 2025-05-25 01:50:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:49.640587 | orchestrator | 2025-05-25 01:50:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:49.640618 | orchestrator | 2025-05-25 01:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:52.700493 | orchestrator | 2025-05-25 01:50:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:52.702338 | orchestrator | 2025-05-25 01:50:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:52.703620 | orchestrator | 2025-05-25 01:50:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:52.703649 | orchestrator | 2025-05-25 01:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:55.756845 | orchestrator | 2025-05-25 01:50:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:55.758249 | orchestrator | 2025-05-25 01:50:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:55.759818 | orchestrator | 2025-05-25 01:50:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:50:55.759846 | orchestrator | 2025-05-25 01:50:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:50:58.811977 | orchestrator | 2025-05-25 01:50:58 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:50:58.814207 | orchestrator | 2025-05-25 01:50:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:50:58.815948 | orchestrator | 2025-05-25 01:50:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:50:58.816064 | orchestrator | 2025-05-25 01:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:01.869409 | orchestrator | 2025-05-25 01:51:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:01.869742 | orchestrator | 2025-05-25 01:51:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:01.871104 | orchestrator | 2025-05-25 01:51:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:01.871136 | orchestrator | 2025-05-25 01:51:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:04.917815 | orchestrator | 2025-05-25 01:51:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:04.919345 | orchestrator | 2025-05-25 01:51:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:04.920900 | orchestrator | 2025-05-25 01:51:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:04.920938 | orchestrator | 2025-05-25 01:51:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:07.962468 | orchestrator | 2025-05-25 01:51:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:07.963655 | orchestrator | 2025-05-25 01:51:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:07.965099 | orchestrator | 2025-05-25 01:51:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:07.965125 | orchestrator | 2025-05-25 01:51:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:11.013762 | orchestrator | 2025-05-25 01:51:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:11.015231 | orchestrator | 2025-05-25 01:51:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:11.016876 | orchestrator | 2025-05-25 01:51:11 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:11.017163 | orchestrator | 2025-05-25 01:51:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:14.064554 | orchestrator | 2025-05-25 01:51:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:14.065654 | orchestrator | 2025-05-25 01:51:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:14.067353 | orchestrator | 2025-05-25 01:51:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:14.067402 | orchestrator | 2025-05-25 01:51:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:17.118372 | orchestrator | 2025-05-25 01:51:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:17.119965 | orchestrator | 2025-05-25 01:51:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:17.122337 | orchestrator | 2025-05-25 01:51:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:17.122382 | orchestrator | 2025-05-25 01:51:17 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:20.178152 | orchestrator | 2025-05-25 01:51:20 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:20.181149 | orchestrator | 2025-05-25 01:51:20 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:20.183444 | orchestrator | 2025-05-25 01:51:20 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:20.183489 | orchestrator | 2025-05-25 01:51:20 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:23.231553 | orchestrator | 2025-05-25 01:51:23 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:23.232899 | orchestrator | 2025-05-25 01:51:23 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:51:23.234522 | orchestrator | 2025-05-25 01:51:23 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:23.234550 | orchestrator | 2025-05-25 01:51:23 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:26.293151 | orchestrator | 2025-05-25 01:51:26 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:26.293328 | orchestrator | 2025-05-25 01:51:26 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:26.293747 | orchestrator | 2025-05-25 01:51:26 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:26.293828 | orchestrator | 2025-05-25 01:51:26 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:29.346196 | orchestrator | 2025-05-25 01:51:29 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:29.347854 | orchestrator | 2025-05-25 01:51:29 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:29.349647 | orchestrator | 2025-05-25 01:51:29 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:29.349719 | orchestrator | 2025-05-25 01:51:29 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:32.394224 | orchestrator | 2025-05-25 01:51:32 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:32.396292 | orchestrator | 2025-05-25 01:51:32 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:32.397843 | orchestrator | 2025-05-25 01:51:32 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:32.397880 | orchestrator | 2025-05-25 01:51:32 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:35.456394 | orchestrator | 2025-05-25 01:51:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:35.458200 | orchestrator 
| 2025-05-25 01:51:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:35.460484 | orchestrator | 2025-05-25 01:51:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:35.461021 | orchestrator | 2025-05-25 01:51:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:38.510727 | orchestrator | 2025-05-25 01:51:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:38.512441 | orchestrator | 2025-05-25 01:51:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:38.514548 | orchestrator | 2025-05-25 01:51:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:38.514674 | orchestrator | 2025-05-25 01:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:41.564420 | orchestrator | 2025-05-25 01:51:41 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:41.565966 | orchestrator | 2025-05-25 01:51:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:41.568024 | orchestrator | 2025-05-25 01:51:41 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:41.568121 | orchestrator | 2025-05-25 01:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:44.612879 | orchestrator | 2025-05-25 01:51:44 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:44.614294 | orchestrator | 2025-05-25 01:51:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:44.616160 | orchestrator | 2025-05-25 01:51:44 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:44.616189 | orchestrator | 2025-05-25 01:51:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:47.660554 | orchestrator | 2025-05-25 01:51:47 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:47.661622 | orchestrator | 2025-05-25 01:51:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:47.663357 | orchestrator | 2025-05-25 01:51:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:47.663387 | orchestrator | 2025-05-25 01:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:50.707914 | orchestrator | 2025-05-25 01:51:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:50.708630 | orchestrator | 2025-05-25 01:51:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:50.709704 | orchestrator | 2025-05-25 01:51:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:50.709727 | orchestrator | 2025-05-25 01:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:53.755909 | orchestrator | 2025-05-25 01:51:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:53.758076 | orchestrator | 2025-05-25 01:51:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:53.759856 | orchestrator | 2025-05-25 01:51:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:53.759907 | orchestrator | 2025-05-25 01:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:51:56.809164 | orchestrator | 2025-05-25 01:51:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:56.810427 | orchestrator | 2025-05-25 01:51:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:56.812176 | orchestrator | 2025-05-25 01:51:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:56.812207 | orchestrator | 2025-05-25 01:51:56 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:51:59.872546 | orchestrator | 2025-05-25 01:51:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:51:59.874127 | orchestrator | 2025-05-25 01:51:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:51:59.876818 | orchestrator | 2025-05-25 01:51:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:51:59.876911 | orchestrator | 2025-05-25 01:51:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:02.928641 | orchestrator | 2025-05-25 01:52:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:02.930885 | orchestrator | 2025-05-25 01:52:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:02.932387 | orchestrator | 2025-05-25 01:52:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:02.932421 | orchestrator | 2025-05-25 01:52:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:05.988403 | orchestrator | 2025-05-25 01:52:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:05.993358 | orchestrator | 2025-05-25 01:52:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:05.994119 | orchestrator | 2025-05-25 01:52:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:05.994198 | orchestrator | 2025-05-25 01:52:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:09.054161 | orchestrator | 2025-05-25 01:52:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:09.057975 | orchestrator | 2025-05-25 01:52:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:09.061524 | orchestrator | 2025-05-25 01:52:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:52:09.061578 | orchestrator | 2025-05-25 01:52:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:12.110348 | orchestrator | 2025-05-25 01:52:12 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:12.111850 | orchestrator | 2025-05-25 01:52:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:12.114137 | orchestrator | 2025-05-25 01:52:12 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:12.114632 | orchestrator | 2025-05-25 01:52:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:15.163762 | orchestrator | 2025-05-25 01:52:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:15.165413 | orchestrator | 2025-05-25 01:52:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:15.167987 | orchestrator | 2025-05-25 01:52:15 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:15.168081 | orchestrator | 2025-05-25 01:52:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:18.220111 | orchestrator | 2025-05-25 01:52:18 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:18.221647 | orchestrator | 2025-05-25 01:52:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:18.223606 | orchestrator | 2025-05-25 01:52:18 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:18.223626 | orchestrator | 2025-05-25 01:52:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:21.277212 | orchestrator | 2025-05-25 01:52:21 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:21.279146 | orchestrator | 2025-05-25 01:52:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:21.280215 | orchestrator | 2025-05-25 01:52:21 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:21.280244 | orchestrator | 2025-05-25 01:52:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:24.336211 | orchestrator | 2025-05-25 01:52:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:24.336945 | orchestrator | 2025-05-25 01:52:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:24.338589 | orchestrator | 2025-05-25 01:52:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:24.338664 | orchestrator | 2025-05-25 01:52:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:27.389256 | orchestrator | 2025-05-25 01:52:27 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:27.392100 | orchestrator | 2025-05-25 01:52:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:27.394145 | orchestrator | 2025-05-25 01:52:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:27.394216 | orchestrator | 2025-05-25 01:52:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:30.447534 | orchestrator | 2025-05-25 01:52:30 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:30.450157 | orchestrator | 2025-05-25 01:52:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:30.453424 | orchestrator | 2025-05-25 01:52:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:30.453546 | orchestrator | 2025-05-25 01:52:30 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:33.505774 | orchestrator | 2025-05-25 01:52:33 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:33.506750 | orchestrator | 2025-05-25 01:52:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:52:33.508082 | orchestrator | 2025-05-25 01:52:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:33.508166 | orchestrator | 2025-05-25 01:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:36.558826 | orchestrator | 2025-05-25 01:52:36 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:36.559830 | orchestrator | 2025-05-25 01:52:36 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:36.561211 | orchestrator | 2025-05-25 01:52:36 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:36.561243 | orchestrator | 2025-05-25 01:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:39.617354 | orchestrator | 2025-05-25 01:52:39 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:39.620121 | orchestrator | 2025-05-25 01:52:39 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:39.621964 | orchestrator | 2025-05-25 01:52:39 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:39.622113 | orchestrator | 2025-05-25 01:52:39 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:42.675716 | orchestrator | 2025-05-25 01:52:42 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:42.678742 | orchestrator | 2025-05-25 01:52:42 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:42.680227 | orchestrator | 2025-05-25 01:52:42 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:42.680658 | orchestrator | 2025-05-25 01:52:42 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:45.731667 | orchestrator | 2025-05-25 01:52:45 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:45.732153 | orchestrator 
| 2025-05-25 01:52:45 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:45.733760 | orchestrator | 2025-05-25 01:52:45 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:45.733962 | orchestrator | 2025-05-25 01:52:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:48.788082 | orchestrator | 2025-05-25 01:52:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:48.789884 | orchestrator | 2025-05-25 01:52:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:48.791673 | orchestrator | 2025-05-25 01:52:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:48.791699 | orchestrator | 2025-05-25 01:52:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:51.843194 | orchestrator | 2025-05-25 01:52:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:51.844772 | orchestrator | 2025-05-25 01:52:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:51.846187 | orchestrator | 2025-05-25 01:52:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:51.846377 | orchestrator | 2025-05-25 01:52:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:54.898783 | orchestrator | 2025-05-25 01:52:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:54.900644 | orchestrator | 2025-05-25 01:52:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:54.903248 | orchestrator | 2025-05-25 01:52:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:54.903335 | orchestrator | 2025-05-25 01:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:52:57.954220 | orchestrator | 2025-05-25 01:52:57 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:52:57.955220 | orchestrator | 2025-05-25 01:52:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:52:57.956436 | orchestrator | 2025-05-25 01:52:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:52:57.956597 | orchestrator | 2025-05-25 01:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:01.012263 | orchestrator | 2025-05-25 01:53:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:01.013498 | orchestrator | 2025-05-25 01:53:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:01.015069 | orchestrator | 2025-05-25 01:53:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:01.015098 | orchestrator | 2025-05-25 01:53:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:04.063508 | orchestrator | 2025-05-25 01:53:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:04.065275 | orchestrator | 2025-05-25 01:53:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:04.067053 | orchestrator | 2025-05-25 01:53:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:04.067163 | orchestrator | 2025-05-25 01:53:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:07.126986 | orchestrator | 2025-05-25 01:53:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:07.129041 | orchestrator | 2025-05-25 01:53:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:07.130913 | orchestrator | 2025-05-25 01:53:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:07.131040 | orchestrator | 2025-05-25 01:53:07 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:53:10.183065 | orchestrator | 2025-05-25 01:53:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:10.185819 | orchestrator | 2025-05-25 01:53:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:10.188118 | orchestrator | 2025-05-25 01:53:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:10.188342 | orchestrator | 2025-05-25 01:53:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:13.241060 | orchestrator | 2025-05-25 01:53:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:13.242736 | orchestrator | 2025-05-25 01:53:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:13.244753 | orchestrator | 2025-05-25 01:53:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:13.244856 | orchestrator | 2025-05-25 01:53:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:16.292097 | orchestrator | 2025-05-25 01:53:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:16.295164 | orchestrator | 2025-05-25 01:53:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:16.296755 | orchestrator | 2025-05-25 01:53:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:16.296970 | orchestrator | 2025-05-25 01:53:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:19.348068 | orchestrator | 2025-05-25 01:53:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:19.350242 | orchestrator | 2025-05-25 01:53:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:19.353457 | orchestrator | 2025-05-25 01:53:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:53:19.353510 | orchestrator | 2025-05-25 01:53:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:22.412138 | orchestrator | 2025-05-25 01:53:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:22.413806 | orchestrator | 2025-05-25 01:53:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:22.415842 | orchestrator | 2025-05-25 01:53:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:22.415877 | orchestrator | 2025-05-25 01:53:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:25.463137 | orchestrator | 2025-05-25 01:53:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:25.463378 | orchestrator | 2025-05-25 01:53:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:25.464228 | orchestrator | 2025-05-25 01:53:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:25.464363 | orchestrator | 2025-05-25 01:53:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:28.512168 | orchestrator | 2025-05-25 01:53:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:28.513194 | orchestrator | 2025-05-25 01:53:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:28.514935 | orchestrator | 2025-05-25 01:53:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:28.515040 | orchestrator | 2025-05-25 01:53:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:31.557074 | orchestrator | 2025-05-25 01:53:31 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:31.557344 | orchestrator | 2025-05-25 01:53:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:31.559843 | orchestrator | 2025-05-25 01:53:31 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:31.559893 | orchestrator | 2025-05-25 01:53:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:34.606976 | orchestrator | 2025-05-25 01:53:34 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:34.608472 | orchestrator | 2025-05-25 01:53:34 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:34.609794 | orchestrator | 2025-05-25 01:53:34 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:34.609823 | orchestrator | 2025-05-25 01:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:37.659848 | orchestrator | 2025-05-25 01:53:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:37.661417 | orchestrator | 2025-05-25 01:53:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:37.663525 | orchestrator | 2025-05-25 01:53:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:37.663560 | orchestrator | 2025-05-25 01:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:40.714242 | orchestrator | 2025-05-25 01:53:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:40.714774 | orchestrator | 2025-05-25 01:53:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:40.716025 | orchestrator | 2025-05-25 01:53:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:40.717047 | orchestrator | 2025-05-25 01:53:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:43.765328 | orchestrator | 2025-05-25 01:53:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:43.766849 | orchestrator | 2025-05-25 01:53:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:53:43.769107 | orchestrator | 2025-05-25 01:53:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:43.769236 | orchestrator | 2025-05-25 01:53:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:46.815433 | orchestrator | 2025-05-25 01:53:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:46.817631 | orchestrator | 2025-05-25 01:53:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:46.820295 | orchestrator | 2025-05-25 01:53:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:46.820396 | orchestrator | 2025-05-25 01:53:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:49.872286 | orchestrator | 2025-05-25 01:53:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:49.874533 | orchestrator | 2025-05-25 01:53:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:49.876852 | orchestrator | 2025-05-25 01:53:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:49.877373 | orchestrator | 2025-05-25 01:53:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:52.931037 | orchestrator | 2025-05-25 01:53:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:52.931738 | orchestrator | 2025-05-25 01:53:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:52.933562 | orchestrator | 2025-05-25 01:53:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:52.933608 | orchestrator | 2025-05-25 01:53:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:55.987242 | orchestrator | 2025-05-25 01:53:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:55.988624 | orchestrator 
| 2025-05-25 01:53:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:55.990986 | orchestrator | 2025-05-25 01:53:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:55.991528 | orchestrator | 2025-05-25 01:53:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:53:59.045719 | orchestrator | 2025-05-25 01:53:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:53:59.046696 | orchestrator | 2025-05-25 01:53:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:53:59.047912 | orchestrator | 2025-05-25 01:53:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:53:59.047937 | orchestrator | 2025-05-25 01:53:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:54:02.094951 | orchestrator | 2025-05-25 01:54:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:54:02.095708 | orchestrator | 2025-05-25 01:54:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:54:02.097680 | orchestrator | 2025-05-25 01:54:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:54:02.097755 | orchestrator | 2025-05-25 01:54:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:54:05.147377 | orchestrator | 2025-05-25 01:54:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:54:05.148805 | orchestrator | 2025-05-25 01:54:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:54:05.150287 | orchestrator | 2025-05-25 01:54:05 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:54:05.150356 | orchestrator | 2025-05-25 01:54:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:54:08.195218 | orchestrator | 2025-05-25 01:54:08 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:54:08.197526 | orchestrator | 2025-05-25 01:54:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:54:08.199281 | orchestrator | 2025-05-25 01:54:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:54:08.199338 | orchestrator | 2025-05-25 01:54:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:54:11.251376 | orchestrator | 2025-05-25 01:54:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:54:11.252737 | orchestrator | 2025-05-25 01:54:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:54:11.254182 | orchestrator | 2025-05-25 01:54:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:54:11.254226 | orchestrator | 2025-05-25 01:54:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:54:14.297049 | orchestrator | 2025-05-25 01:54:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:54:14.298961 | orchestrator | 2025-05-25 01:54:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:54:14.301430 | orchestrator | 2025-05-25 01:54:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:54:14.301461 | orchestrator | 2025-05-25 01:54:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:54:17.345261 | orchestrator | 2025-05-25 01:54:17 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:54:17.347262 | orchestrator | 2025-05-25 01:54:17 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:54:17.349988 | orchestrator | 2025-05-25 01:54:17 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:54:17.350068 | orchestrator | 2025-05-25 01:54:17 | INFO  | Wait 1 second(s) until the next 
check
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:55:52.074798 | orchestrator | 2025-05-25 01:55:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:55:55.134993 | orchestrator | 2025-05-25 01:55:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:55:55.137628 | orchestrator | 2025-05-25 01:55:55 | INFO  | Task 953c4869-4328-42c7-bcfa-68861f2aa4c5 is in state STARTED 2025-05-25 01:55:55.139782 | orchestrator | 2025-05-25 01:55:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:55:55.142441 | orchestrator | 2025-05-25 01:55:55 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:55:55.143035 | orchestrator | 2025-05-25 01:55:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:56:07.367449 | orchestrator | 2025-05-25 01:56:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:56:07.367785 | orchestrator | 2025-05-25 01:56:07 | INFO  | Task 953c4869-4328-42c7-bcfa-68861f2aa4c5 is in state SUCCESS 2025-05-25 01:56:07.369621 | orchestrator | 2025-05-25 01:56:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:56:07.371370 | orchestrator | 2025-05-25 01:56:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:56:07.371535 | orchestrator | 2025-05-25 01:56:07 | INFO  | Wait 1 second(s) until the next
check
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:57:45.129638 | orchestrator | 2025-05-25 01:57:45 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:57:48.177012 | orchestrator | 2025-05-25 01:57:48 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:57:48.177980 | orchestrator | 2025-05-25 01:57:48 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:57:48.180436 | orchestrator | 2025-05-25 01:57:48 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:57:48.180473 | orchestrator | 2025-05-25 01:57:48 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:57:51.230297 | orchestrator | 2025-05-25 01:57:51 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:57:51.231705 | orchestrator | 2025-05-25 01:57:51 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:57:51.233148 | orchestrator | 2025-05-25 01:57:51 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:57:51.233181 | orchestrator | 2025-05-25 01:57:51 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:57:54.277038 | orchestrator | 2025-05-25 01:57:54 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:57:54.278209 | orchestrator | 2025-05-25 01:57:54 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:57:54.279291 | orchestrator | 2025-05-25 01:57:54 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:57:54.279554 | orchestrator | 2025-05-25 01:57:54 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:57:57.326979 | orchestrator | 2025-05-25 01:57:57 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:57:57.328189 | orchestrator | 2025-05-25 01:57:57 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:57:57.329718 | orchestrator | 2025-05-25 01:57:57 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:57:57.329741 | orchestrator | 2025-05-25 01:57:57 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:00.373037 | orchestrator | 2025-05-25 01:58:00 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:00.373218 | orchestrator | 2025-05-25 01:58:00 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:00.374427 | orchestrator | 2025-05-25 01:58:00 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:00.374547 | orchestrator | 2025-05-25 01:58:00 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:03.424127 | orchestrator | 2025-05-25 01:58:03 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:03.426206 | orchestrator | 2025-05-25 01:58:03 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:03.428648 | orchestrator | 2025-05-25 01:58:03 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:03.428678 | orchestrator | 2025-05-25 01:58:03 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:06.482423 | orchestrator | 2025-05-25 01:58:06 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:06.483649 | orchestrator | 2025-05-25 01:58:06 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:06.489302 | orchestrator | 2025-05-25 01:58:06 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:06.489416 | orchestrator | 2025-05-25 01:58:06 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:09.538151 | orchestrator | 2025-05-25 01:58:09 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:09.539471 | orchestrator 
| 2025-05-25 01:58:09 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:09.541027 | orchestrator | 2025-05-25 01:58:09 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:09.541057 | orchestrator | 2025-05-25 01:58:09 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:12.590169 | orchestrator | 2025-05-25 01:58:12 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:12.591678 | orchestrator | 2025-05-25 01:58:12 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:12.593186 | orchestrator | 2025-05-25 01:58:12 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:12.593218 | orchestrator | 2025-05-25 01:58:12 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:15.646252 | orchestrator | 2025-05-25 01:58:15 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:15.646947 | orchestrator | 2025-05-25 01:58:15 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:15.648445 | orchestrator | 2025-05-25 01:58:15 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:15.648485 | orchestrator | 2025-05-25 01:58:15 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:18.700297 | orchestrator | 2025-05-25 01:58:18 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:18.703724 | orchestrator | 2025-05-25 01:58:18 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:18.705183 | orchestrator | 2025-05-25 01:58:18 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:18.705218 | orchestrator | 2025-05-25 01:58:18 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:21.751004 | orchestrator | 2025-05-25 01:58:21 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:21.752185 | orchestrator | 2025-05-25 01:58:21 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:21.753623 | orchestrator | 2025-05-25 01:58:21 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:21.753706 | orchestrator | 2025-05-25 01:58:21 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:24.798239 | orchestrator | 2025-05-25 01:58:24 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:24.800517 | orchestrator | 2025-05-25 01:58:24 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:24.803079 | orchestrator | 2025-05-25 01:58:24 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:24.803125 | orchestrator | 2025-05-25 01:58:24 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:27.854802 | orchestrator | 2025-05-25 01:58:27 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:27.856601 | orchestrator | 2025-05-25 01:58:27 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:27.858289 | orchestrator | 2025-05-25 01:58:27 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:27.858323 | orchestrator | 2025-05-25 01:58:27 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:30.912601 | orchestrator | 2025-05-25 01:58:30 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:30.913516 | orchestrator | 2025-05-25 01:58:30 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:30.915483 | orchestrator | 2025-05-25 01:58:30 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:30.915510 | orchestrator | 2025-05-25 01:58:30 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:58:33.968454 | orchestrator | 2025-05-25 01:58:33 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:33.969232 | orchestrator | 2025-05-25 01:58:33 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:33.971871 | orchestrator | 2025-05-25 01:58:33 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:33.971938 | orchestrator | 2025-05-25 01:58:33 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:37.020576 | orchestrator | 2025-05-25 01:58:37 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:37.022743 | orchestrator | 2025-05-25 01:58:37 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:37.025044 | orchestrator | 2025-05-25 01:58:37 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:37.025093 | orchestrator | 2025-05-25 01:58:37 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:40.077687 | orchestrator | 2025-05-25 01:58:40 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:40.078747 | orchestrator | 2025-05-25 01:58:40 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:40.080839 | orchestrator | 2025-05-25 01:58:40 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:40.080893 | orchestrator | 2025-05-25 01:58:40 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:43.130314 | orchestrator | 2025-05-25 01:58:43 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:43.131442 | orchestrator | 2025-05-25 01:58:43 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:43.132826 | orchestrator | 2025-05-25 01:58:43 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:58:43.132862 | orchestrator | 2025-05-25 01:58:43 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:46.182287 | orchestrator | 2025-05-25 01:58:46 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:46.183174 | orchestrator | 2025-05-25 01:58:46 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:46.184758 | orchestrator | 2025-05-25 01:58:46 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:46.184786 | orchestrator | 2025-05-25 01:58:46 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:49.237220 | orchestrator | 2025-05-25 01:58:49 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:49.238885 | orchestrator | 2025-05-25 01:58:49 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:49.240922 | orchestrator | 2025-05-25 01:58:49 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:49.240954 | orchestrator | 2025-05-25 01:58:49 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:52.278470 | orchestrator | 2025-05-25 01:58:52 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:52.280173 | orchestrator | 2025-05-25 01:58:52 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:52.280193 | orchestrator | 2025-05-25 01:58:52 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:52.280201 | orchestrator | 2025-05-25 01:58:52 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:55.325794 | orchestrator | 2025-05-25 01:58:55 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:55.327360 | orchestrator | 2025-05-25 01:58:55 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:55.328757 | orchestrator | 2025-05-25 01:58:55 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:55.328860 | orchestrator | 2025-05-25 01:58:55 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:58:58.382894 | orchestrator | 2025-05-25 01:58:58 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:58:58.384945 | orchestrator | 2025-05-25 01:58:58 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:58:58.386531 | orchestrator | 2025-05-25 01:58:58 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:58:58.386605 | orchestrator | 2025-05-25 01:58:58 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:01.443065 | orchestrator | 2025-05-25 01:59:01 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:01.444022 | orchestrator | 2025-05-25 01:59:01 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:01.445985 | orchestrator | 2025-05-25 01:59:01 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:01.446063 | orchestrator | 2025-05-25 01:59:01 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:04.493928 | orchestrator | 2025-05-25 01:59:04 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:04.494693 | orchestrator | 2025-05-25 01:59:04 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:04.496632 | orchestrator | 2025-05-25 01:59:04 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:04.496667 | orchestrator | 2025-05-25 01:59:04 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:07.546543 | orchestrator | 2025-05-25 01:59:07 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:07.547441 | orchestrator | 2025-05-25 01:59:07 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in 
state STARTED 2025-05-25 01:59:07.548609 | orchestrator | 2025-05-25 01:59:07 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:07.548634 | orchestrator | 2025-05-25 01:59:07 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:10.600628 | orchestrator | 2025-05-25 01:59:10 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:10.602319 | orchestrator | 2025-05-25 01:59:10 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:10.604970 | orchestrator | 2025-05-25 01:59:10 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:10.605008 | orchestrator | 2025-05-25 01:59:10 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:13.654753 | orchestrator | 2025-05-25 01:59:13 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:13.656727 | orchestrator | 2025-05-25 01:59:13 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:13.658887 | orchestrator | 2025-05-25 01:59:13 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:13.658977 | orchestrator | 2025-05-25 01:59:13 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:16.713764 | orchestrator | 2025-05-25 01:59:16 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:16.714890 | orchestrator | 2025-05-25 01:59:16 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:16.716195 | orchestrator | 2025-05-25 01:59:16 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:16.716276 | orchestrator | 2025-05-25 01:59:16 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:19.766879 | orchestrator | 2025-05-25 01:59:19 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:19.767853 | orchestrator 
| 2025-05-25 01:59:19 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:19.769648 | orchestrator | 2025-05-25 01:59:19 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:19.769682 | orchestrator | 2025-05-25 01:59:19 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:22.816623 | orchestrator | 2025-05-25 01:59:22 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:22.819836 | orchestrator | 2025-05-25 01:59:22 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:22.821149 | orchestrator | 2025-05-25 01:59:22 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:22.821243 | orchestrator | 2025-05-25 01:59:22 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:25.874321 | orchestrator | 2025-05-25 01:59:25 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:25.875284 | orchestrator | 2025-05-25 01:59:25 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:25.877005 | orchestrator | 2025-05-25 01:59:25 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:25.877055 | orchestrator | 2025-05-25 01:59:25 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:28.916515 | orchestrator | 2025-05-25 01:59:28 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:28.917561 | orchestrator | 2025-05-25 01:59:28 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:28.918792 | orchestrator | 2025-05-25 01:59:28 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:28.918854 | orchestrator | 2025-05-25 01:59:28 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:31.962877 | orchestrator | 2025-05-25 01:59:31 | INFO  | Task 
99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:31.964854 | orchestrator | 2025-05-25 01:59:31 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:31.967647 | orchestrator | 2025-05-25 01:59:31 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:31.967682 | orchestrator | 2025-05-25 01:59:31 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:35.013900 | orchestrator | 2025-05-25 01:59:35 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:35.015089 | orchestrator | 2025-05-25 01:59:35 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:35.016299 | orchestrator | 2025-05-25 01:59:35 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:35.016451 | orchestrator | 2025-05-25 01:59:35 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:38.063226 | orchestrator | 2025-05-25 01:59:38 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:38.064055 | orchestrator | 2025-05-25 01:59:38 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:38.065313 | orchestrator | 2025-05-25 01:59:38 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:38.065345 | orchestrator | 2025-05-25 01:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:41.118000 | orchestrator | 2025-05-25 01:59:41 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:41.119696 | orchestrator | 2025-05-25 01:59:41 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:41.122131 | orchestrator | 2025-05-25 01:59:41 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:41.122179 | orchestrator | 2025-05-25 01:59:41 | INFO  | Wait 1 second(s) until the next 
check 2025-05-25 01:59:44.180145 | orchestrator | 2025-05-25 01:59:44 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:44.181308 | orchestrator | 2025-05-25 01:59:44 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:44.183259 | orchestrator | 2025-05-25 01:59:44 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:44.183293 | orchestrator | 2025-05-25 01:59:44 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:47.238329 | orchestrator | 2025-05-25 01:59:47 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:47.240128 | orchestrator | 2025-05-25 01:59:47 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:47.241091 | orchestrator | 2025-05-25 01:59:47 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:47.241134 | orchestrator | 2025-05-25 01:59:47 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:50.295125 | orchestrator | 2025-05-25 01:59:50 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:50.296467 | orchestrator | 2025-05-25 01:59:50 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:50.298513 | orchestrator | 2025-05-25 01:59:50 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:50.298553 | orchestrator | 2025-05-25 01:59:50 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:53.349442 | orchestrator | 2025-05-25 01:59:53 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:53.350528 | orchestrator | 2025-05-25 01:59:53 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:53.352208 | orchestrator | 2025-05-25 01:59:53 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 
01:59:53.352241 | orchestrator | 2025-05-25 01:59:53 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:56.409486 | orchestrator | 2025-05-25 01:59:56 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:56.411159 | orchestrator | 2025-05-25 01:59:56 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:56.412934 | orchestrator | 2025-05-25 01:59:56 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:56.413018 | orchestrator | 2025-05-25 01:59:56 | INFO  | Wait 1 second(s) until the next check 2025-05-25 01:59:59.468724 | orchestrator | 2025-05-25 01:59:59 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 01:59:59.469551 | orchestrator | 2025-05-25 01:59:59 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 01:59:59.472317 | orchestrator | 2025-05-25 01:59:59 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 01:59:59.472348 | orchestrator | 2025-05-25 01:59:59 | INFO  | Wait 1 second(s) until the next check 2025-05-25 02:00:02.526939 | orchestrator | 2025-05-25 02:00:02 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 02:00:02.527952 | orchestrator | 2025-05-25 02:00:02 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 02:00:02.528062 | orchestrator | 2025-05-25 02:00:02 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 02:00:02.528128 | orchestrator | 2025-05-25 02:00:02 | INFO  | Wait 1 second(s) until the next check 2025-05-25 02:00:05.574085 | orchestrator | 2025-05-25 02:00:05 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 02:00:05.575493 | orchestrator | 2025-05-25 02:00:05 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 02:00:05.580176 | orchestrator | 2025-05-25 02:00:05 | 
INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 02:00:05.580206 | orchestrator | 2025-05-25 02:00:05 | INFO  | Wait 1 second(s) until the next check 2025-05-25 02:00:08.623992 | orchestrator | 2025-05-25 02:00:08 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 02:00:08.625901 | orchestrator | 2025-05-25 02:00:08 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 02:00:08.628274 | orchestrator | 2025-05-25 02:00:08 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 02:00:08.628461 | orchestrator | 2025-05-25 02:00:08 | INFO  | Wait 1 second(s) until the next check 2025-05-25 02:00:11.677477 | orchestrator | 2025-05-25 02:00:11 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 02:00:11.678624 | orchestrator | 2025-05-25 02:00:11 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 02:00:11.680064 | orchestrator | 2025-05-25 02:00:11 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 02:00:11.680107 | orchestrator | 2025-05-25 02:00:11 | INFO  | Wait 1 second(s) until the next check 2025-05-25 02:00:14.734446 | orchestrator | 2025-05-25 02:00:14 | INFO  | Task 99824a24-efa4-450e-bb69-b7a40fafd92a is in state STARTED 2025-05-25 02:00:14.736253 | orchestrator | 2025-05-25 02:00:14 | INFO  | Task 8e1d812e-51fb-49b3-975c-ec462afb5fee is in state STARTED 2025-05-25 02:00:14.737843 | orchestrator | 2025-05-25 02:00:14 | INFO  | Task 87a65d8a-f61b-4aca-827b-a276ae86a9d4 is in state STARTED 2025-05-25 02:00:14.737871 | orchestrator | 2025-05-25 02:00:14 | INFO  | Wait 1 second(s) until the next check 2025-05-25 02:00:16.751249 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-25 02:00:16.753995 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-25 02:00:17.567733 | 
2025-05-25 02:00:17.567940 | PLAY [Post output play]
2025-05-25 02:00:17.585356 |
2025-05-25 02:00:17.585532 | LOOP [stage-output : Register sources]
2025-05-25 02:00:17.649443 |
2025-05-25 02:00:17.649837 | TASK [stage-output : Check sudo]
2025-05-25 02:00:18.556784 | orchestrator | sudo: a password is required
2025-05-25 02:00:18.688399 | orchestrator | ok: Runtime: 0:00:00.015215
2025-05-25 02:00:18.703168 |
2025-05-25 02:00:18.703382 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-25 02:00:18.744446 |
2025-05-25 02:00:18.744711 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-25 02:00:18.815592 | orchestrator | ok
2025-05-25 02:00:18.825794 |
2025-05-25 02:00:18.825936 | LOOP [stage-output : Ensure target folders exist]
2025-05-25 02:00:19.309036 | orchestrator | ok: "docs"
2025-05-25 02:00:19.309468 |
2025-05-25 02:00:19.551416 | orchestrator | ok: "artifacts"
2025-05-25 02:00:19.792313 | orchestrator | ok: "logs"
2025-05-25 02:00:19.813186 |
2025-05-25 02:00:19.813379 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-25 02:00:19.852567 |
2025-05-25 02:00:19.852882 | TASK [stage-output : Make all log files readable]
2025-05-25 02:00:20.135994 | orchestrator | ok
2025-05-25 02:00:20.145463 |
2025-05-25 02:00:20.145605 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-25 02:00:20.191766 | orchestrator | skipping: Conditional result was False
2025-05-25 02:00:20.206088 |
2025-05-25 02:00:20.206264 | TASK [stage-output : Discover log files for compression]
2025-05-25 02:00:20.233503 | orchestrator | skipping: Conditional result was False
2025-05-25 02:00:20.249097 |
2025-05-25 02:00:20.249361 | LOOP [stage-output : Archive everything from logs]
2025-05-25 02:00:20.301114 |
2025-05-25 02:00:20.301386 | PLAY [Post cleanup play]
2025-05-25 02:00:20.311727 |
2025-05-25 02:00:20.311898 | TASK [Set cloud fact (Zuul deployment)]
2025-05-25 02:00:20.372611 | orchestrator | ok
2025-05-25 02:00:20.385153 |
2025-05-25 02:00:20.385352 | TASK [Set cloud fact (local deployment)]
2025-05-25 02:00:20.410537 | orchestrator | skipping: Conditional result was False
2025-05-25 02:00:20.422042 |
2025-05-25 02:00:20.422191 | TASK [Clean the cloud environment]
2025-05-25 02:00:21.025431 | orchestrator | 2025-05-25 02:00:21 - clean up servers
2025-05-25 02:00:21.884538 | orchestrator | 2025-05-25 02:00:21 - testbed-manager
2025-05-25 02:00:21.970236 | orchestrator | 2025-05-25 02:00:21 - testbed-node-0
2025-05-25 02:00:22.059712 | orchestrator | 2025-05-25 02:00:22 - testbed-node-5
2025-05-25 02:00:22.152588 | orchestrator | 2025-05-25 02:00:22 - testbed-node-3
2025-05-25 02:00:22.246558 | orchestrator | 2025-05-25 02:00:22 - testbed-node-4
2025-05-25 02:00:22.340219 | orchestrator | 2025-05-25 02:00:22 - testbed-node-1
2025-05-25 02:00:22.431962 | orchestrator | 2025-05-25 02:00:22 - testbed-node-2
2025-05-25 02:00:22.532496 | orchestrator | 2025-05-25 02:00:22 - clean up keypairs
2025-05-25 02:00:22.552288 | orchestrator | 2025-05-25 02:00:22 - testbed
2025-05-25 02:00:22.575703 | orchestrator | 2025-05-25 02:00:22 - wait for servers to be gone
2025-05-25 02:00:33.600478 | orchestrator | 2025-05-25 02:00:33 - clean up ports
2025-05-25 02:00:33.785414 | orchestrator | 2025-05-25 02:00:33 - 609e722c-e122-429d-8762-d6cbd398a358
2025-05-25 02:00:34.040775 | orchestrator | 2025-05-25 02:00:34 - 86db76b1-e0b3-4a48-ac7b-8b83e403d44b
2025-05-25 02:00:34.308643 | orchestrator | 2025-05-25 02:00:34 - 9740b160-20ae-4bf2-9c34-93e0cf302493
2025-05-25 02:00:34.571377 | orchestrator | 2025-05-25 02:00:34 - a22ab17a-a5ef-40a4-b39a-783f4c221e83
2025-05-25 02:00:34.812546 | orchestrator | 2025-05-25 02:00:34 - bf45a673-8836-44ae-86cb-21bba1a4d0d8
2025-05-25 02:00:35.043790 | orchestrator | 2025-05-25 02:00:35 - c3117f56-8399-4fef-8027-0bf293421634
2025-05-25 02:00:35.870647 | orchestrator | 2025-05-25 02:00:35 - e09d70f9-bc80-484b-a81b-255928b27bc2
2025-05-25 02:00:36.079928 | orchestrator | 2025-05-25 02:00:36 - clean up volumes
2025-05-25 02:00:36.211766 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-0-node-base
2025-05-25 02:00:36.250599 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-3-node-base
2025-05-25 02:00:36.304436 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-1-node-base
2025-05-25 02:00:36.350052 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-4-node-base
2025-05-25 02:00:36.398130 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-2-node-base
2025-05-25 02:00:36.444238 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-5-node-base
2025-05-25 02:00:36.488511 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-manager-base
2025-05-25 02:00:36.533065 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-2-node-5
2025-05-25 02:00:36.574135 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-6-node-3
2025-05-25 02:00:36.617679 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-3-node-3
2025-05-25 02:00:36.660287 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-4-node-4
2025-05-25 02:00:36.702340 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-8-node-5
2025-05-25 02:00:36.750105 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-0-node-3
2025-05-25 02:00:36.792854 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-5-node-5
2025-05-25 02:00:36.833972 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-7-node-4
2025-05-25 02:00:36.875666 | orchestrator | 2025-05-25 02:00:36 - testbed-volume-1-node-4
2025-05-25 02:00:36.916843 | orchestrator | 2025-05-25 02:00:36 - disconnect routers
2025-05-25 02:00:37.028157 | orchestrator | 2025-05-25 02:00:37 - testbed
2025-05-25 02:00:37.928883 | orchestrator | 2025-05-25 02:00:37 - clean up subnets
2025-05-25 02:00:37.967280 | orchestrator | 2025-05-25 02:00:37 - subnet-testbed-management
2025-05-25 02:00:38.174118 | orchestrator | 2025-05-25 02:00:38 - clean up networks
2025-05-25 02:00:38.348539 | orchestrator | 2025-05-25 02:00:38 - net-testbed-management
2025-05-25 02:00:38.643095 | orchestrator | 2025-05-25 02:00:38 - clean up security groups
2025-05-25 02:00:38.684687 | orchestrator | 2025-05-25 02:00:38 - testbed-node
2025-05-25 02:00:38.805958 | orchestrator | 2025-05-25 02:00:38 - testbed-management
2025-05-25 02:00:38.917181 | orchestrator | 2025-05-25 02:00:38 - clean up floating ips
2025-05-25 02:00:38.954182 | orchestrator | 2025-05-25 02:00:38 - 81.163.192.93
2025-05-25 02:00:39.770254 | orchestrator | 2025-05-25 02:00:39 - clean up routers
2025-05-25 02:00:39.910249 | orchestrator | 2025-05-25 02:00:39 - testbed
2025-05-25 02:00:40.975993 | orchestrator | ok: Runtime: 0:00:20.041791
2025-05-25 02:00:40.980546 |
2025-05-25 02:00:40.980690 | PLAY RECAP
2025-05-25 02:00:40.980830 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-25 02:00:40.980904 |
2025-05-25 02:00:41.135217 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-25 02:00:41.136258 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-25 02:00:41.906747 |
2025-05-25 02:00:41.907038 | PLAY [Cleanup play]
2025-05-25 02:00:41.931905 |
2025-05-25 02:00:41.932164 | TASK [Set cloud fact (Zuul deployment)]
2025-05-25 02:00:41.989607 | orchestrator | ok
2025-05-25 02:00:41.996960 |
2025-05-25 02:00:41.997111 | TASK [Set cloud fact (local deployment)]
2025-05-25 02:00:42.032208 | orchestrator | skipping: Conditional result was False
2025-05-25 02:00:42.050997 |
2025-05-25 02:00:42.051200 | TASK [Clean the cloud environment]
2025-05-25 02:00:43.193141 | orchestrator | 2025-05-25 02:00:43 - clean up servers
2025-05-25 02:00:43.798262 | orchestrator | 2025-05-25 02:00:43 - clean up keypairs
2025-05-25 02:00:43.814935 | orchestrator | 2025-05-25 02:00:43 - wait for servers to be gone
2025-05-25 02:00:43.854677 | orchestrator | 2025-05-25 02:00:43 - clean up ports
2025-05-25 02:00:43.933463 | orchestrator | 2025-05-25 02:00:43 - clean up volumes
2025-05-25 02:00:44.007032 | orchestrator | 2025-05-25 02:00:44 - disconnect routers
2025-05-25 02:00:44.037653 | orchestrator | 2025-05-25 02:00:44 - clean up subnets
2025-05-25 02:00:44.058117 | orchestrator | 2025-05-25 02:00:44 - clean up networks
2025-05-25 02:00:44.246587 | orchestrator | 2025-05-25 02:00:44 - clean up security groups
2025-05-25 02:00:44.284613 | orchestrator | 2025-05-25 02:00:44 - clean up floating ips
2025-05-25 02:00:44.313949 | orchestrator | 2025-05-25 02:00:44 - clean up routers
2025-05-25 02:00:44.591968 | orchestrator | ok: Runtime: 0:00:01.518086
2025-05-25 02:00:44.596524 |
2025-05-25 02:00:44.596752 | PLAY RECAP
2025-05-25 02:00:44.596890 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-25 02:00:44.596958 |
2025-05-25 02:00:44.755452 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-25 02:00:44.760889 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-25 02:00:45.545960 |
2025-05-25 02:00:45.546137 | PLAY [Base post-fetch]
2025-05-25 02:00:45.562280 |
2025-05-25 02:00:45.562440 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-25 02:00:45.620431 | orchestrator | skipping: Conditional result was False
2025-05-25 02:00:45.632837 |
2025-05-25 02:00:45.633059 | TASK [fetch-output : Set log path for single node]
2025-05-25 02:00:45.692945 | orchestrator | ok
2025-05-25 02:00:45.703735 |
2025-05-25 02:00:45.703885 | LOOP [fetch-output : Ensure local output dirs]
2025-05-25 02:00:46.222463 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/work/logs"
2025-05-25 02:00:46.534518 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/work/artifacts"
2025-05-25 02:00:46.805514 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/f9c00a01ee164987837fb5c6a0b88135/work/docs"
2025-05-25 02:00:46.828714 |
2025-05-25 02:00:46.828967 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-25 02:00:47.813630 | orchestrator | changed: .d..t...... ./
2025-05-25 02:00:47.814073 | orchestrator | changed: All items complete
2025-05-25 02:00:47.814220 |
2025-05-25 02:00:48.560167 | orchestrator | changed: .d..t...... ./
2025-05-25 02:00:49.290406 | orchestrator | changed: .d..t...... ./
2025-05-25 02:00:49.320231 |
2025-05-25 02:00:49.320498 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-25 02:00:49.360602 | orchestrator | skipping: Conditional result was False
2025-05-25 02:00:49.365466 | orchestrator | skipping: Conditional result was False
2025-05-25 02:00:49.383146 |
2025-05-25 02:00:49.383302 | PLAY RECAP
2025-05-25 02:00:49.383404 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-25 02:00:49.383444 |
2025-05-25 02:00:49.534498 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-25 02:00:49.537513 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-25 02:00:50.316241 |
2025-05-25 02:00:50.316481 | PLAY [Base post]
2025-05-25 02:00:50.333383 |
2025-05-25 02:00:50.333604 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-25 02:00:51.316120 | orchestrator | changed
2025-05-25 02:00:51.329073 |
2025-05-25 02:00:51.329291 | PLAY RECAP
2025-05-25 02:00:51.329425 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-25 02:00:51.329519 |
2025-05-25 02:00:51.476119 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-25 02:00:51.478538 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-25 02:00:52.318006 |
2025-05-25 02:00:52.318231 | PLAY [Base post-logs]
2025-05-25 02:00:52.330628 |
2025-05-25 02:00:52.330802 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-25 02:00:52.841673 | localhost | changed
2025-05-25 02:00:52.860334 |
2025-05-25 02:00:52.860522 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-25 02:00:52.887305 | localhost | ok
2025-05-25 02:00:52.890397 |
2025-05-25 02:00:52.890495 | TASK [Set zuul-log-path fact]
2025-05-25 02:00:52.915485 | localhost | ok
2025-05-25 02:00:52.923619 |
2025-05-25 02:00:52.923723 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-25 02:00:52.949434 | localhost | ok
2025-05-25 02:00:52.952620 |
2025-05-25 02:00:52.952721 | TASK [upload-logs : Create log directories]
2025-05-25 02:00:53.513515 | localhost | changed
2025-05-25 02:00:53.518773 |
2025-05-25 02:00:53.518987 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-25 02:00:54.124546 | localhost -> localhost | ok: Runtime: 0:00:00.011107
2025-05-25 02:00:54.139613 |
2025-05-25 02:00:54.139851 | TASK [upload-logs : Upload logs to log server]
2025-05-25 02:00:54.790772 | localhost | Output suppressed because no_log was given
2025-05-25 02:00:54.794951 |
2025-05-25 02:00:54.795120 | LOOP [upload-logs : Compress console log and json output]
2025-05-25 02:00:54.861568 | localhost | skipping: Conditional result was False
2025-05-25 02:00:54.868838 | localhost | skipping: Conditional result was False
2025-05-25 02:00:54.880732 |
2025-05-25 02:00:54.880910 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-25 02:00:54.949117 | localhost | skipping: Conditional result was False
2025-05-25 02:00:54.949778 |
2025-05-25 02:00:54.957109 | localhost | skipping: Conditional result was False
2025-05-25 02:00:54.968725 |
2025-05-25 02:00:54.968965 | LOOP [upload-logs : Upload console log and json output]
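Note on the "Clean the cloud environment" task logged above: it removes OpenStack resources in dependency order, so that nothing is deleted while another resource still references it (servers before their ports, ports before subnets, subnets before networks, and routers last after their interfaces are disconnected). A minimal sketch of that ordering as data, with a small helper to check it; the names and helper are illustrative only, the actual cleanup is performed by the testbed's own tooling:

```python
# Resource cleanup order as it appears in the log above.
# Illustrative sketch only; not part of the testbed cleanup script.
CLEANUP_ORDER = [
    "servers",            # compute instances first: they hold ports and volumes
    "keypairs",
    # (the log then waits for servers to be gone before continuing)
    "ports",              # ports must go before their subnets
    "volumes",
    "router interfaces",  # "disconnect routers" in the log
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",            # routers last, once nothing references them
]

def must_precede(earlier: str, later: str) -> bool:
    """Return True if `earlier` is cleaned up before `later`."""
    return CLEANUP_ORDER.index(earlier) < CLEANUP_ORDER.index(later)
```

For example, `must_precede("servers", "ports")` holds because a server's ports cannot be deleted while the server still exists.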